Posted to mapreduce-user@hadoop.apache.org by nagarjuna kanamarlapudi <na...@gmail.com> on 2013/02/20 16:36:38 UTC

In Compatible clusterIDs

Hi,

I am trying to set up a single-node cluster of Hadoop 2.0.*

When trying to start the datanode I got the following error. Could anyone
help me out?

Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
124.123.215.187:9000
java.io.IOException: Incompatible clusterIDs in
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/dfs/data:
namenode clusterID = CID-800b7eb1-7a83-4649-86b7-617913e82ad8; datanode
clusterID = CID-1740b490-8413-451c-926f-2f0676b217ec
        at
org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
        at
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
        at
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:850)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:821)
        at
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
        at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
        at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
        at java.lang.Thread.run(Thread.java:680)
2013-02-20 21:03:39,856 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
for: Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
124.123.215.187:9000
2013-02-20 21:03:39,958 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
BP-1894309265-124.123.215.187-1361374377471 (storage id
DS-1175433225-124.123.215.187-50010-1361374235895)
2013-02-20 21:03:41,959 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2013-02-20 21:03:41,961 INFO org.apache.hadoop.util.ExitUtil: Exiting with
status 0
2013-02-20 21:03:41,963 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at nagarjuna/124.123.215.187
************************************************************/
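
The exception names the mismatch directly: every run of "hdfs namenode
-format" generates a fresh clusterID, and a datanode whose storage directory
still carries the ID from an earlier format refuses to start. Below is a
minimal sketch of the two usual fixes, using the data directory from the log
above; the dfs/name path is an assumption based on Hadoop's default layout
under hadoop.tmp.dir, so adjust it to your dfs.namenode.name.dir.

    # Compare the two IDs; both VERSION files are plain text
    cat /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/dfs/name/current/VERSION
    cat /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/dfs/data/current/VERSION

    # Fix 1: wipe the datanode storage (destroys any blocks stored locally),
    # then restart the datanode so it adopts the namenode's clusterID
    rm -rf /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/dfs/data

    # Fix 2: keep the blocks; hand-edit the clusterID line in the datanode's
    # VERSION file to match the namenode's, then restart the datanode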

Re: In Compatible clusterIDs

Posted by Alex Current <ac...@gmail.com>.
Have you installed Hadoop on this node before?  If so, did you clean out
all of your old data dirs?
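
A quick way to check is to list every storage VERSION file under the install
tree and compare their clusterIDs; a sketch, assuming the paths from the
original post:

    find /Users/nagarjunak/Documents/hadoop-install -name VERSION \
        -exec grep -H clusterID {} \;

Any data directory whose clusterID differs from the namenode's is left over
from an earlier format.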


On Wed, Feb 20, 2013 at 4:41 PM, nagarjuna kanamarlapudi <
nagarjuna.kanamarlapudi@gmail.com> wrote:

> /etc/hosts
>
> 127.0.0.1       nagarjuna
> 255.255.255.255 broadcasthost
> ::1             localhost
> fe80::1%lo0     localhost
>
> and all configuration files pointing to nagarjuna and not localhost
> gave me the above error.
>
> 127.0.0.1       localhost
> 127.0.0.1       nagarjuna
> 255.255.255.255 broadcasthost
> ::1             localhost
> fe80::1%lo0     localhost
>
>
> and all configuration files pointing to localhost and not nagarjuna, I am
> able to successfully start the cluster.
>
>
> Does it have something to do with passwordless ssh?
>
>
>
> On Thu, Feb 21, 2013 at 1:19 AM, Vijay Thakorlal <vi...@hotmail.com> wrote:
>
>> Hi Nagarjuna,
>>
>> What's in your /etc/hosts file? I think the line in the logs where it says
>> “DatanodeRegistration(0.0.0.0 [….]” should be the hostname or IP of the
>> datanode (124.123.215.187, since you said it's a pseudo-distributed setup)
>> and not 0.0.0.0.
>>
>> By the way, are you using the dfs.hosts parameter to specify the
>> datanodes that can connect to the namenode?
>>
>> Vijay
>>
>> From: nagarjuna kanamarlapudi [mailto:nagarjuna.kanamarlapudi@gmail.com]
>> Sent: 20 February 2013 15:52
>> To: user@hadoop.apache.org
>> Subject: Re: In Compatible clusterIDs
>>
>> Hi Jean Marc,
>>
>> Yes, this is the cluster I am trying to create and will then scale up.
>>
>> As per your suggestion I deleted the folder
>> /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20 and
>> formatted the cluster.
>>
>> Now I get the following error.
>>
>> 2013-02-20 21:17:25,668 FATAL
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
>> block pool Block pool BP-811644675-124.123.215.187-1361375214801 (storage
>> id DS-1515823288-124.123.215.187-50010-1361375245435) service to nagarjuna/
>> 124.123.215.187:9000
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException):
>> Datanode denied communication with namenode: DatanodeRegistration(0.0.0.0,
>> storageID=DS-1515823288-124.123.215.187-50010-1361375245435,
>> infoPort=50075, ipcPort=50020,
>> storageInfo=lv=-40;cid=CID-723b02a7-3441-41b5-8045-2a45a9cf96b0;nsid=1805451571;c=0)
>>         at
>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:629)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>>         at
>> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>>         at
>> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>>         at org.apache.hadoop.ipc.Protob
>>
>> On Wed, Feb 20, 2013 at 9:10 PM, Jean-Marc Spaggiari <
>> jean-marc@spaggiari.org> wrote:
>>
>> Hi Nagarjuna,
>>
>> Is it a test cluster? Do you have another cluster running close by?
>> Also, is it your first try?
>>
>> It seems there is some previous data in the dfs directory which is not
>> in sync with the last installation.
>>
>> Maybe you can remove the content of
>> /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20
>> if it's not useful for you, reformat your node and restart it?
>>
>> JM
>>
>> 2013/2/20, nagarjuna kanamarlapudi <na...@gmail.com>:
>>
>> > Hi,
>> >
>> > I am trying to set up a single-node cluster of Hadoop 2.0.*
>> >
>> > When trying to start the datanode I got the following error. Could
>> > anyone help me out?
>> >
>> > Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
>> > DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
>> > 124.123.215.187:9000
>> > java.io.IOException: Incompatible clusterIDs in
>> > /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/dfs/data:
>> > namenode clusterID = CID-800b7eb1-7a83-4649-86b7-617913e82ad8; datanode
>> > clusterID = CID-1740b490-8413-451c-926f-2f0676b217ec
>> >         at
>> > org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
>> >         at
>> > org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
>> >         at
>> > org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
>> >         at
>> > org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:850)
>> >         at
>> > org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:821)
>> >         at
>> > org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
>> >         at
>> > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
>> >         at
>> > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
>> >         at java.lang.Thread.run(Thread.java:680)
>> > 2013-02-20 21:03:39,856 WARN
>> > org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
>> > for: Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
>> > DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
>> > 124.123.215.187:9000
>> > 2013-02-20 21:03:39,958 INFO
>> > org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
>> > BP-1894309265-124.123.215.187-1361374377471 (storage id
>> > DS-1175433225-124.123.215.187-50010-1361374235895)
>> > 2013-02-20 21:03:41,959 WARN
>> > org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
>> > 2013-02-20 21:03:41,961 INFO org.apache.hadoop.util.ExitUtil: Exiting
>> > with status 0
>> > 2013-02-20 21:03:41,963 INFO
>> > org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>> > /************************************************************
>> > SHUTDOWN_MSG: Shutting down DataNode at nagarjuna/124.123.215.187
>> > ************************************************************/

Re: In Compatible clusterIDs

Posted by nagarjuna kanamarlapudi <na...@gmail.com>.
/etc/hosts

127.0.0.1       nagarjuna
255.255.255.255 broadcasthost
::1             localhost
fe80::1%lo0     localhost

and all configuration files pointing to nagarjuna and not localhost
gave me the above error.

127.0.0.1       localhost
127.0.0.1       nagarjuna
255.255.255.255 broadcasthost
::1             localhost
fe80::1%lo0     localhost


and all configuration files pointing to localhost and not nagarjuna, I am
able to successfully start the cluster.


Does it have something to do with passwordless ssh?
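
Passwordless ssh only affects the start-dfs.sh wrapper scripts; registration
itself is an RPC from datanode to namenode, so this behaves like a
name-resolution problem. When the hostname resolves only to 127.0.0.1, the
datanode can end up registering from a non-routable address, which fits the
0.0.0.0 in the log. A sketch of a hosts file that keeps the configs pointing
at the hostname, assuming the machine keeps the 124.123.215.187 address seen
in the logs:

    127.0.0.1       localhost
    124.123.215.187 nagarjuna
    255.255.255.255 broadcasthost
    ::1             localhost
    fe80::1%lo0     localhost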



On Thu, Feb 21, 2013 at 1:19 AM, Vijay Thakorlal <vi...@hotmail.com> wrote:

> Hi Nagarjuna,
>
> What's in your /etc/hosts file? I think the line in the logs where it says
> “DatanodeRegistration(0.0.0.0 [….]” should be the hostname or IP of the
> datanode (124.123.215.187, since you said it's a pseudo-distributed setup)
> and not 0.0.0.0.
>
> By the way, are you using the dfs.hosts parameter to specify the
> datanodes that can connect to the namenode?
>
> Vijay
>
> From: nagarjuna kanamarlapudi [mailto:nagarjuna.kanamarlapudi@gmail.com]
> Sent: 20 February 2013 15:52
> To: user@hadoop.apache.org
> Subject: Re: In Compatible clusterIDs
>
> Hi Jean Marc,
>
> Yes, this is the cluster I am trying to create and will then scale up.
>
> As per your suggestion I deleted the folder
> /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20 and
> formatted the cluster.
>
> Now I get the following error.
>
> 2013-02-20 21:17:25,668 FATAL
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
> block pool Block pool BP-811644675-124.123.215.187-1361375214801 (storage
> id DS-1515823288-124.123.215.187-50010-1361375245435) service to nagarjuna/
> 124.123.215.187:9000
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException):
> Datanode denied communication with namenode: DatanodeRegistration(0.0.0.0,
> storageID=DS-1515823288-124.123.215.187-50010-1361375245435,
> infoPort=50075, ipcPort=50020,
> storageInfo=lv=-40;cid=CID-723b02a7-3441-41b5-8045-2a45a9cf96b0;nsid=1805451571;c=0)
>         at
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:629)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>         at
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>         at
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>         at org.apache.hadoop.ipc.Protob
>
> On Wed, Feb 20, 2013 at 9:10 PM, Jean-Marc Spaggiari <
> jean-marc@spaggiari.org> wrote:
>
> Hi Nagarjuna,
>
> Is it a test cluster? Do you have another cluster running close by?
> Also, is it your first try?
>
> It seems there is some previous data in the dfs directory which is not
> in sync with the last installation.
>
> Maybe you can remove the content of
> /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20
> if it's not useful for you, reformat your node and restart it?
>
> JM
>
> 2013/2/20, nagarjuna kanamarlapudi <na...@gmail.com>:
>
> > Hi,
> >
> > I am trying to set up a single-node cluster of Hadoop 2.0.*
> >
> > When trying to start the datanode I got the following error. Could
> > anyone help me out?
> >
> > Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> > DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
> > 124.123.215.187:9000
> > java.io.IOException: Incompatible clusterIDs in
> > /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/dfs/data:
> > namenode clusterID = CID-800b7eb1-7a83-4649-86b7-617913e82ad8; datanode
> > clusterID = CID-1740b490-8413-451c-926f-2f0676b217ec
> >         at
> > org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
> >         at
> > org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
> >         at
> > org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
> >         at
> > org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:850)
> >         at
> > org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:821)
> >         at
> > org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
> >         at
> > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
> >         at
> > org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
> >         at java.lang.Thread.run(Thread.java:680)
> > 2013-02-20 21:03:39,856 WARN
> > org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> > for: Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> > DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
> > 124.123.215.187:9000
> > 2013-02-20 21:03:39,958 INFO
> > org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
> > BP-1894309265-124.123.215.187-1361374377471 (storage id
> > DS-1175433225-124.123.215.187-50010-1361374235895)
> > 2013-02-20 21:03:41,959 WARN
> > org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> > 2013-02-20 21:03:41,961 INFO org.apache.hadoop.util.ExitUtil: Exiting
> > with status 0
> > 2013-02-20 21:03:41,963 INFO
> > org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> > /************************************************************
> > SHUTDOWN_MSG: Shutting down DataNode at nagarjuna/124.123.215.187
> > ************************************************************/

RE: In Compatible clusterIDs

Posted by Vijay Thakorlal <vi...@hotmail.com>.
Hi Nagarjuna,

 

What's in your /etc/hosts file? I think the line in the logs where it says
"DatanodeRegistration(0.0.0.0 [..])" should be the hostname or IP of the
datanode (124.123.215.187, since you said it's a pseudo-distributed setup)
and not 0.0.0.0.

 

By the way, are you using the dfs.hosts parameter for specifying the
datanodes that can connect to the namenode?

 

Vijay
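
For context on the dfs.hosts question: when an include file is configured,
any datanode not listed in it is rejected with exactly this
DisallowedDatanodeException, and the same exception can fire with no include
file at all if the registering address does not resolve cleanly (which would
fit the 0.0.0.0 above). A sketch of the hdfs-site.xml entry; the
/etc/hadoop/conf/dfs.include path is only an example:

    <property>
      <name>dfs.hosts</name>
      <value>/etc/hadoop/conf/dfs.include</value>
    </property>

The include file is plain text, one datanode hostname or IP per line, and the
namenode re-reads it after "hdfs dfsadmin -refreshNodes".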

 

From: nagarjuna kanamarlapudi [mailto:nagarjuna.kanamarlapudi@gmail.com] 
Sent: 20 February 2013 15:52
To: user@hadoop.apache.org
Subject: Re: In Compatible clusterIDs

 

 

Hi Jean Marc,

 

Yes, this is the cluster I am trying to create and will then scale up.

 

As per your suggestion I deleted the folder
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20 and
formatted the cluster.

 

 

Now I get the following error.

 

 

2013-02-20 21:17:25,668 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
block pool Block pool BP-811644675-124.123.215.187-1361375214801 (storage id
DS-1515823288-124.123.215.187-50010-1361375245435) service to
nagarjuna/124.123.215.187:9000
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException):
Datanode denied communication with namenode: DatanodeRegistration(0.0.0.0,
storageID=DS-1515823288-124.123.215.187-50010-1361375245435, infoPort=50075,
ipcPort=50020,
storageInfo=lv=-40;cid=CID-723b02a7-3441-41b5-8045-2a45a9cf96b0;nsid=1805451571;c=0)

        at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:629)

        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)

        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)

        at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)

        at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)

        at org.apache.hadoop.ipc.Protob

 

 

On Wed, Feb 20, 2013 at 9:10 PM, Jean-Marc Spaggiari
<je...@spaggiari.org> wrote:

Hi Nagarjuna,

Is it a test cluster? Do you have another cluster running close-by?
Also, is it your first try?

It seems there is some previous data in the dfs directory which is not
in sync with the last installation.

Maybe you can remove the content of
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20
if it's not usefull for you, reformat your node and restart it?

JM

2013/2/20, nagarjuna kanamarlapudi <na...@gmail.com>:

> Hi,
>
> I am trying to setup single node cluster of hadop 2.0.*
>
> When trying to start datanode I got the following error. Could anyone help
> me out
>
> Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
> 124.123.215.187:9000
> java.io.IOException: Incompatible clusterIDs in
>
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/dfs/dat
a:
> namenode clusterID = CID-800b7eb1-7a83-4649-86b7-617913e82ad8; datanode
> clusterID = CID-1740b490-8413-451c-926f-2f0676b217ec
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.
java:391)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(Dat
aStorage.java:191)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(Dat
aStorage.java:219)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:85
0)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:
821)
>         at
>
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceI
nfo(BPOfferService.java:280)
>         at
>
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshak
e(BPServiceActor.java:222)
>         at
>
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.jav
a:664)
>         at java.lang.Thread.run(Thread.java:680)
> 2013-02-20 21:03:39,856 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> for: Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
> 124.123.215.187:9000
> 2013-02-20 21:03:39,958 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
> BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895)
> 2013-02-20 21:03:41,959 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2013-02-20 21:03:41,961 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 0
> 2013-02-20 21:03:41,963 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at nagarjuna/124.123.215.187
> ************************************************************/
>

 


RE: In Compatible clusterIDs

Posted by Vijay Thakorlal <vi...@hotmail.com>.
Hi Nagarjuna,

 

What's is in your /etc/hosts file? I think the line in logs where it says
"DataNodeRegistration(0.0.0.0 [..]", should be the hostname or IP of the
datanode (124.123.215.187 since you said it's a pseudo-distributed setup)
and not 0.0.0.0.

 

By the way are you using the dfs.hosts parameter for specifying the
datanodes that can connect to the namenode?

 

Vijay

 

From: nagarjuna kanamarlapudi [mailto:nagarjuna.kanamarlapudi@gmail.com] 
Sent: 20 February 2013 15:52
To: user@hadoop.apache.org
Subject: Re: In Compatible clusterIDs

 

 

Hi Jean Marc,

 

Yes, this is the cluster I am trying  to create and then will scale up.

 

As per your suggestion I deleted the folder
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20 an
formatted the cluster.

 

 

Now I get the following error.

 

 

2013-02-20 21:17:25,668 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
block pool Block pool BP-811644675-124.123.215.187-1361375214801 (storage id
DS-1515823288-124.123.215.187-50010-1361375245435) service to
nagarjuna/124.123.215.187:9000

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol
.DisallowedDatanodeException): Datanode denied communication with namenode:
DatanodeRegistration(0.0.0.0,
storageID=DS-1515823288-124.123.215.187-50010-1361375245435, infoPort=50075,
ipcPort=50020,
storageInfo=lv=-40;cid=CID-723b02a7-3441-41b5-8045-2a45a9cf96b0;nsid=1805451
571;c=0)

        at
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatano
de(DatanodeManager.java:629)

        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNames
ystem.java:3459)

        at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(Na
meNodeRpcServer.java:881)

        at
org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.reg
isterDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)

        at
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtoco
lService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)

        at org.apache.hadoop.ipc.Protob

 

 

On Wed, Feb 20, 2013 at 9:10 PM, Jean-Marc Spaggiari
<je...@spaggiari.org> wrote:

Hi Nagarjuna,

Is it a test cluster? Do you have another cluster running close-by?
Also, is it your first try?

It seems there is some previous data in the dfs directory which is not
in sync with the last installation.

Maybe you can remove the content of
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20
if it's not usefull for you, reformat your node and restart it?

JM

2013/2/20, nagarjuna kanamarlapudi <na...@gmail.com>:

> Hi,
>
> I am trying to setup single node cluster of hadop 2.0.*
>
> When trying to start datanode I got the following error. Could anyone help
> me out
>
> Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
> 124.123.215.187:9000
> java.io.IOException: Incompatible clusterIDs in
>
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/dfs/dat
a:
> namenode clusterID = CID-800b7eb1-7a83-4649-86b7-617913e82ad8; datanode
> clusterID = CID-1740b490-8413-451c-926f-2f0676b217ec
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.
java:391)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(Dat
aStorage.java:191)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(Dat
aStorage.java:219)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:85
0)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:
821)
>         at
>
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceI
nfo(BPOfferService.java:280)
>         at
>
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshak
e(BPServiceActor.java:222)
>         at
>
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.jav
a:664)
>         at java.lang.Thread.run(Thread.java:680)
> 2013-02-20 21:03:39,856 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> for: Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
> 124.123.215.187:9000
> 2013-02-20 21:03:39,958 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
> BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895)
> 2013-02-20 21:03:41,959 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2013-02-20 21:03:41,961 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 0
> 2013-02-20 21:03:41,963 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at nagarjuna/124.123.215.187
> ************************************************************/
>

 


RE: In Compatible clusterIDs

Posted by Vijay Thakorlal <vi...@hotmail.com>.
Hi Nagarjuna,

 

What's is in your /etc/hosts file? I think the line in logs where it says
"DataNodeRegistration(0.0.0.0 [..]", should be the hostname or IP of the
datanode (124.123.215.187 since you said it's a pseudo-distributed setup)
and not 0.0.0.0.

 

By the way are you using the dfs.hosts parameter for specifying the
datanodes that can connect to the namenode?

 

Vijay

 

From: nagarjuna kanamarlapudi [mailto:nagarjuna.kanamarlapudi@gmail.com] 
Sent: 20 February 2013 15:52
To: user@hadoop.apache.org
Subject: Re: In Compatible clusterIDs

 

 

Hi Jean Marc,

 

Yes, this is the cluster I am trying  to create and then will scale up.

 

As per your suggestion I deleted the folder
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20 an
formatted the cluster.

 

 

Now I get the following error.

 

 

2013-02-20 21:17:25,668 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
block pool Block pool BP-811644675-124.123.215.187-1361375214801 (storage id
DS-1515823288-124.123.215.187-50010-1361375245435) service to
nagarjuna/124.123.215.187:9000

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol
.DisallowedDatanodeException): Datanode denied communication with namenode:
DatanodeRegistration(0.0.0.0,
storageID=DS-1515823288-124.123.215.187-50010-1361375245435, infoPort=50075,
ipcPort=50020,
storageInfo=lv=-40;cid=CID-723b02a7-3441-41b5-8045-2a45a9cf96b0;nsid=1805451
571;c=0)

        at
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatano
de(DatanodeManager.java:629)

        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNames
ystem.java:3459)

        at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(Na
meNodeRpcServer.java:881)

        at
org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.reg
isterDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)

        at
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtoco
lService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)

        at org.apache.hadoop.ipc.Protob

 

 

On Wed, Feb 20, 2013 at 9:10 PM, Jean-Marc Spaggiari
<je...@spaggiari.org> wrote:

Hi Nagarjuna,

Is it a test cluster? Do you have another cluster running close-by?
Also, is it your first try?

It seems there is some previous data in the dfs directory which is not
in sync with the last installation.

Maybe you can remove the content of
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20
if it's not usefull for you, reformat your node and restart it?

JM

2013/2/20, nagarjuna kanamarlapudi <na...@gmail.com>:

> Hi,
>
> I am trying to setup single node cluster of hadop 2.0.*
>
> When trying to start datanode I got the following error. Could anyone help
> me out
>
> Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
> 124.123.215.187:9000
> java.io.IOException: Incompatible clusterIDs in
>
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/dfs/dat
a:
> namenode clusterID = CID-800b7eb1-7a83-4649-86b7-617913e82ad8; datanode
> clusterID = CID-1740b490-8413-451c-926f-2f0676b217ec
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.
java:391)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(Dat
aStorage.java:191)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(Dat
aStorage.java:219)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:85
0)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:
821)
>         at
>
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceI
nfo(BPOfferService.java:280)
>         at
>
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshak
e(BPServiceActor.java:222)
>         at
>
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.jav
a:664)
>         at java.lang.Thread.run(Thread.java:680)
> 2013-02-20 21:03:39,856 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> for: Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
> 124.123.215.187:9000
> 2013-02-20 21:03:39,958 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
> BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895)
> 2013-02-20 21:03:41,959 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2013-02-20 21:03:41,961 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 0
> 2013-02-20 21:03:41,963 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at nagarjuna/124.123.215.187
> ************************************************************/
>

 


RE: In Compatible clusterIDs

Posted by Vijay Thakorlal <vi...@hotmail.com>.
Hi Nagarjuna,

 

What's is in your /etc/hosts file? I think the line in logs where it says
"DataNodeRegistration(0.0.0.0 [..]", should be the hostname or IP of the
datanode (124.123.215.187 since you said it's a pseudo-distributed setup)
and not 0.0.0.0.

 

By the way are you using the dfs.hosts parameter for specifying the
datanodes that can connect to the namenode?

 

Vijay

 

From: nagarjuna kanamarlapudi [mailto:nagarjuna.kanamarlapudi@gmail.com] 
Sent: 20 February 2013 15:52
To: user@hadoop.apache.org
Subject: Re: In Compatible clusterIDs

 

 

Hi Jean Marc,

 

Yes, this is the cluster I am trying  to create and then will scale up.

 

As per your suggestion I deleted the folder
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20 an
formatted the cluster.

 

 

Now I get the following error.

 

 

2013-02-20 21:17:25,668 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
block pool Block pool BP-811644675-124.123.215.187-1361375214801 (storage id
DS-1515823288-124.123.215.187-50010-1361375245435) service to
nagarjuna/124.123.215.187:9000

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol
.DisallowedDatanodeException): Datanode denied communication with namenode:
DatanodeRegistration(0.0.0.0,
storageID=DS-1515823288-124.123.215.187-50010-1361375245435, infoPort=50075,
ipcPort=50020,
storageInfo=lv=-40;cid=CID-723b02a7-3441-41b5-8045-2a45a9cf96b0;nsid=1805451
571;c=0)

        at
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatano
de(DatanodeManager.java:629)

        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNames
ystem.java:3459)

        at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(Na
meNodeRpcServer.java:881)

        at
org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.reg
isterDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)

        at
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtoco
lService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)

        at org.apache.hadoop.ipc.Protob

 

 

On Wed, Feb 20, 2013 at 9:10 PM, Jean-Marc Spaggiari
<je...@spaggiari.org> wrote:

Hi Nagarjuna,

Is it a test cluster? Do you have another cluster running close-by?
Also, is it your first try?

It seems there is some previous data in the dfs directory which is not
in sync with the last installation.

Maybe you can remove the content of
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20
if it's not usefull for you, reformat your node and restart it?

JM

2013/2/20, nagarjuna kanamarlapudi <na...@gmail.com>:

> Hi,
>
> I am trying to setup single node cluster of hadop 2.0.*
>
> When trying to start datanode I got the following error. Could anyone help
> me out
>
> Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
> 124.123.215.187:9000
> java.io.IOException: Incompatible clusterIDs in
>
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/dfs/dat
a:
> namenode clusterID = CID-800b7eb1-7a83-4649-86b7-617913e82ad8; datanode
> clusterID = CID-1740b490-8413-451c-926f-2f0676b217ec
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.
java:391)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(Dat
aStorage.java:191)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(Dat
aStorage.java:219)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:85
0)
>         at
>
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:
821)
>         at
>
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceI
nfo(BPOfferService.java:280)
>         at
>
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshak
e(BPServiceActor.java:222)
>         at
>
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.jav
a:664)
>         at java.lang.Thread.run(Thread.java:680)
> 2013-02-20 21:03:39,856 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> for: Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
> 124.123.215.187:9000
> 2013-02-20 21:03:39,958 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
> BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895)
> 2013-02-20 21:03:41,959 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2013-02-20 21:03:41,961 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 0
> 2013-02-20 21:03:41,963 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at nagarjuna/124.123.215.187
> ************************************************************/
>

 


Re: In Compatible clusterIDs

Posted by nagarjuna kanamarlapudi <na...@gmail.com>.
Hi Jean Marc,

Yes, this is the cluster I am trying to create and will then scale up.

As per your suggestion I deleted the folder
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20 and
formatted the cluster.


Now I get the following error.


2013-02-20 21:17:25,668 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
block pool Block pool BP-811644675-124.123.215.187-1361375214801 (storage
id DS-1515823288-124.123.215.187-50010-1361375245435) service to nagarjuna/
124.123.215.187:9000
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException):
Datanode denied communication with namenode: DatanodeRegistration(0.0.0.0,
storageID=DS-1515823288-124.123.215.187-50010-1361375245435,
infoPort=50075, ipcPort=50020,
storageInfo=lv=-40;cid=CID-723b02a7-3441-41b5-8045-2a45a9cf96b0;nsid=1805451571;c=0)
        at
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:629)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
        at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
        at
org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
        at
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
        at org.apache.hadoop.ipc.Protob


On Wed, Feb 20, 2013 at 9:10 PM, Jean-Marc Spaggiari <
jean-marc@spaggiari.org> wrote:

> Hi Nagarjuna,
>
> Is it a test cluster? Do you have another cluster running close-by?
> Also, is it your first try?
>
> It seems there is some previous data in the dfs directory which is not
> in sync with the last installation.
>
> Maybe you can remove the content of
> /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20
> if it's not useful for you, reformat your node and restart it?
>
> JM
>
> 2013/2/20, nagarjuna kanamarlapudi <na...@gmail.com>:
> > Hi,
> >
> > I am trying to set up a single node cluster of hadoop 2.0.*
> >
> > When trying to start datanode I got the following error. Could anyone
> > help me out?
> >
> > Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> > DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
> > 124.123.215.187:9000
> > java.io.IOException: Incompatible clusterIDs in
> >
> /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/dfs/data:
> > namenode clusterID = CID-800b7eb1-7a83-4649-86b7-617913e82ad8; datanode
> > clusterID = CID-1740b490-8413-451c-926f-2f0676b217ec
> >         at
> >
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
> >         at
> >
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
> >         at
> >
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
> >         at
> >
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:850)
> >         at
> >
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:821)
> >         at
> >
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
> >         at
> >
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
> >         at
> >
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
> >         at java.lang.Thread.run(Thread.java:680)
> > 2013-02-20 21:03:39,856 WARN
> > org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool
> service
> > for: Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> > DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
> > 124.123.215.187:9000
> > 2013-02-20 21:03:39,958 INFO
> > org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
> > BP-1894309265-124.123.215.187-1361374377471 (storage id
> > DS-1175433225-124.123.215.187-50010-1361374235895)
> > 2013-02-20 21:03:41,959 WARN
> > org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> > 2013-02-20 21:03:41,961 INFO org.apache.hadoop.util.ExitUtil: Exiting
> with
> > status 0
> > 2013-02-20 21:03:41,963 INFO
> > org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> > /************************************************************
> > SHUTDOWN_MSG: Shutting down DataNode at nagarjuna/124.123.215.187
> > ************************************************************/
> >
>
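
The DatanodeRegistration(0.0.0.0, ...) in the trace above indicates the
DataNode registered without a resolvable hostname, which the NameNode can
reject. A hedged sketch of an /etc/hosts entry for a pseudo-distributed
setup, using the hostname and IP quoted in this thread (adjust both for
your own machine):

    127.0.0.1        localhost
    124.123.215.187  nagarjuna

If dfs.hosts is configured in hdfs-site.xml, the name the DataNode
registers with must also appear in that include file, or registration is
denied with the same DisallowedDatanodeException.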

Re: In Compatible clusterIDs

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
Hi Nagarjuna,

Is it a test cluster? Do you have another cluster running close-by?
Also, is it your first try?

It seems there is some previous data in the dfs directory which is not
in sync with the last installation.

Maybe you can remove the content of
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20
if it's not useful for you, reformat your node and restart it?

JM

2013/2/20, nagarjuna kanamarlapudi <na...@gmail.com>:
> Hi,
>
> I am trying to set up a single node cluster of hadoop 2.0.*
>
> When trying to start datanode I got the following error. Could anyone help
> me out?
>
> Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
> 124.123.215.187:9000
> java.io.IOException: Incompatible clusterIDs in
> /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/dfs/data:
> namenode clusterID = CID-800b7eb1-7a83-4649-86b7-617913e82ad8; datanode
> clusterID = CID-1740b490-8413-451c-926f-2f0676b217ec
>         at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:850)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:821)
>         at
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
>         at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
>         at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
>         at java.lang.Thread.run(Thread.java:680)
> 2013-02-20 21:03:39,856 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> for: Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895) service to nagarjuna/
> 124.123.215.187:9000
> 2013-02-20 21:03:39,958 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
> BP-1894309265-124.123.215.187-1361374377471 (storage id
> DS-1175433225-124.123.215.187-50010-1361374235895)
> 2013-02-20 21:03:41,959 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2013-02-20 21:03:41,961 INFO org.apache.hadoop.util.ExitUtil: Exiting with
> status 0
> 2013-02-20 21:03:41,963 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at nagarjuna/124.123.215.187
> ************************************************************/
>
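
A minimal sketch of the clean-up JM suggests, assuming the
hadoop-2.0.3-alpha layout and the tmp_20 directory from this thread (this
wipes all HDFS data, so it is only appropriate on a disposable test
cluster):

    $ sbin/stop-dfs.sh            # stop any running NameNode/DataNode first
    $ rm -rf /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/*
    $ bin/hdfs namenode -format   # generates a fresh clusterID
    $ sbin/start-dfs.sh           # an empty DataNode adopts the new clusterID

Alternatively, to keep existing block data, the clusterID line in
tmp_20/dfs/data/current/VERSION can be edited by hand to match the
NameNode's clusterID before restarting.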
