Posted to hdfs-user@hadoop.apache.org by Felipe Gutierrez <fe...@gmail.com> on 2013/08/07 18:59:47 UTC
Datanode doesn't connect to Namenode
Hi everyone,
On my slave machine (cloud15) the datanode shows this log. It doesn't
connect to the master (cloud6).
2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: cloud15/192.168.188.15:54310. Already tried 9 time(s); retry
policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1
SECONDS)
2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at cloud15/
192.168.188.15:54310 not available yet, Zzzzz...
But when I type the jps command on the slave machine, DataNode is running.
This is my core-site.xml file on the slave machine (cloud15):
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://cloud15:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
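The value above is the likely culprit: on the slave, fs.default.name points at cloud15 (the slave itself), so the datanode keeps retrying its own port 54310 instead of the master's. A minimal, self-contained sketch of the check (hostnames taken from this thread; the config is stubbed into a temp file for illustration, so point the sed command at the real file on an actual node):

```shell
# Stub of the slave's core-site.xml from this thread (illustration only).
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://cloud15:54310</value>
  </property>
</configuration>
EOF
# Extract the authority (host:port) from the fs.default.name URI.
authority=$(sed -n 's|.*<value>hdfs://\([^<]*\)</value>.*|\1|p' /tmp/core-site.xml)
host=${authority%%:*}
# On the slaves this must name the master, not the slave itself.
if [ "$host" != "cloud6" ]; then
  echo "WARNING: fs.default.name points at $host, not the master cloud6"
fi
```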
On the master machine I just swap cloud15 for cloud6.
In the file /etc/hosts I have the lines (192.168.188.15 cloud15) and
(192.168.188.6 cloud6), and both machines can reach each other through ssh
without a password.
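For what it's worth, the /etc/hosts mapping can be sanity-checked mechanically. A small sketch using the two entries quoted above (stubbed into a variable here; on a real node read /etc/hosts directly):

```shell
# The two host entries from this thread, standing in for /etc/hosts.
hosts='192.168.188.15 cloud15
192.168.188.6 cloud6'
# Resolve each hostname against the table; an empty result means a missing
# or misspelled entry (a common cause of nodes not finding each other).
for name in cloud6 cloud15; do
  ip=$(echo "$hosts" | awk -v n="$name" '$2 == n {print $1}')
  echo "$name -> ${ip:-UNRESOLVED}"
done
```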
Am I missing anything?
Thanks in advance!
Felipe
--
*--
-- Felipe Oliveira Gutierrez
-- Felipe.o.Gutierrez@gmail.com
-- https://sites.google.com/site/lipe82/Home/diaadia*
Re: Datanode doesn't connect to Namenode
Posted by Felipe Gutierrez <fe...@gmail.com>.
Thanks for the hints, Shekhar.
My cluster is running well.
Felipe
On Thu, Aug 8, 2013 at 8:56 AM, Shekhar Sharma <sh...@gmail.com>wrote:
> Keep the configuration the same on the datanodes as well for the time
> being. The only thing the datanode (slave) machine needs to know is the
> masters file (that is, who the master is).
> You also need to tell the slave machine where your namenode is running,
> which you specify in the property fs.default.name, and where your job
> tracker is running, which you specify in the property mapred.job.tracker.
>
> Hopefully you can bring up your cluster now.
>
> If you still face issues, you can follow my blog:
> http://ksssblogs.blogspot.in/2013/07/multi-node-hadoop-cluster-set-using-vms.html
> Regards,
> Som Shekhar Sharma
> +91-8197243810
>
>
> On Thu, Aug 8, 2013 at 5:21 PM, Shekhar Sharma <sh...@gmail.com>wrote:
>
>> If you have removed this property from the slave machines, then your DN
>> information will be created under the /tmp folder, and once you reboot your
>> datanode machines, that information will be lost.
>>
>> Sorry, I had not seen the logs, but you don't have to play around with the
>> properties. See, a datanode will not come up in a scenario where it is not
>> able to send the heartbeat signal to the namenode at port 54310.
>>
>>
>>
>>
>> Go step by step:
>>
>> Check whether you can ping every machine and whether you can SSH in a
>> passwordless manner.
>>
>> Let's say I have one master machine whose hostname is *Master* and two
>> slave machines, *Slave0* and *Slave1* (I am assuming the OS used is
>> CentOS).
>>
>> On the *master machine* do the following things:
>>
>> *First disable the firewall:* as the root user run the following commands:
>> service iptables save
>> service iptables stop
>> chkconfig iptables off
>>
>> Then specify the following properties in the corresponding files.
>>
>> *mapred-site.xml*
>>
>> - mapred.job.tracker (Master:54311)
>>
>> *core-site.xml*
>>
>> - fs.default.name (hdfs://Master:54310)
>> - hadoop.tmp.dir (choose some persistent directory)
>>
>> *hdfs-site.xml*
>>
>> - dfs.replication (3)
>> - dfs.block.size (64 MB)
>>
>> *Masters file*
>>
>> - Master
>>
>> *Slaves file*
>>
>> - Slave0
>> - Slave1
>>
>> *hadoop-env.sh*
>>
>> - export JAVA_HOME=<Your java home directory>
>>
>>
>> On the *slave0 machine*:
>>
>> - Disable the firewall
>> - Set the same properties as you did on the master machine
>>
>> On the *slave1 machine*:
>>
>> - Disable the firewall
>> - Set the same properties as you did on the master machine
>>
>>
>> Once you start the cluster by running the command start-all.sh, check that
>> ports 54310 and 54311 have been opened by running the command "netstat
>> -tuplen"; it will show whether the ports are open or not.
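The netstat check above can be scripted so the result is unambiguous. A sketch, with the netstat output stubbed by two sample lines for illustration (on a real master, substitute the live `netstat -tuplen` output):

```shell
# Sample "netstat -tuplen"-style lines standing in for live output.
listening='tcp 0 0 192.168.188.6:54310 0.0.0.0:* LISTEN 500 12345 1111/java
tcp 0 0 192.168.188.6:54311 0.0.0.0:* LISTEN 500 12346 1112/java'
# The NameNode RPC port (54310) and JobTracker port (54311) must both be open.
for port in 54310 54311; do
  if echo "$listening" | grep -q ":$port "; then
    echo "port $port is listening"
  else
    echo "port $port is NOT open"
  fi
done
```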
>>
>> Regards,
>> Som Shekhar Sharma
>> +91-8197243810
>>
>>
>> On Thu, Aug 8, 2013 at 4:57 PM, Felipe Gutierrez <
>> felipe.o.gutierrez@gmail.com> wrote:
>>
>>> Thanks,
>>> in all files I changed the value to the master (cloud6), and I removed the
>>> property <name>hadoop.tmp.dir</name>.
>>>
>>> Felipe
>>>
>>>
>>> On Wed, Aug 7, 2013 at 3:20 PM, Shekhar Sharma <sh...@gmail.com>wrote:
>>>
>>>> Disable the firewall on data node and namenode machines..
>>>> Regards,
>>>> Som Shekhar Sharma
>>>> +91-8197243810
>>>>
>>>>
>>>> On Wed, Aug 7, 2013 at 11:33 PM, Jitendra Yadav <
>>>> jeetuyadav200890@gmail.com> wrote:
>>>>
>>>>> Your hdfs name entry should be the same on the master and the datanodes:
>>>>>
>>>>> * <name>fs.default.name</name>*
>>>>> *<value>hdfs://cloud6:54310</value>*
>>>>>
>>>>> Thanks
>>>>> On Wed, Aug 7, 2013 at 11:05 PM, Felipe Gutierrez <
>>>>> felipe.o.gutierrez@gmail.com> wrote:
>>>>>
>>>>>> on my slave the process is running:
>>>>>> hduser@cloud15:/usr/local/hadoop$ jps
>>>>>> 19025 DataNode
>>>>>> 19092 Jps
>>>>>>
>>>>>>
>>>>>> On Wed, Aug 7, 2013 at 2:26 PM, Jitendra Yadav <
>>>>>> jeetuyadav200890@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> Your logs show that the process is making the IPC call not to the
>>>>>>> namenode; it is hitting the datanode itself.
>>>>>>>
>>>>>>> Could you please check your datanode process status?
>>>>>>>
>>>>>>> Regards
>>>>>>> Jitendra
>>>>>>>
>>>>>>> On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
>>>>>>> felipe.o.gutierrez@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi everyone,
>>>>>>>>
>>>>>>>> My slave machine (cloud15) the datanode shows this log. It doesn't
>>>>>>>> connect to the master (cloud6).
>>>>>>>>
>>>>>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client:
>>>>>>>> Retrying connect to server: cloud15/192.168.188.15:54310. Already
>>>>>>>> tried 9 time(s); retry policy is
>>>>>>>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
>>>>>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at
>>>>>>>> cloud15/192.168.188.15:54310 not available yet, Zzzzz...
>>>>>>>>
>>>>>>>> But when I type jps command on slave machine DataNode is running.
>>>>>>>> This is my file core-site.xml in slave machine (cloud15):
>>>>>>>> <configuration>
>>>>>>>> <property>
>>>>>>>> <name>hadoop.tmp.dir</name>
>>>>>>>> <value>/app/hadoop/tmp</value>
>>>>>>>> <description>A base for other temporary directories.</description>
>>>>>>>> </property>
>>>>>>>>
>>>>>>>> <property>
>>>>>>>> <name>fs.default.name</name>
>>>>>>>> <value>hdfs://cloud15:54310</value>
>>>>>>>> <description>The name of the default file system. A URI whose
>>>>>>>> scheme and authority determine the FileSystem implementation. The
>>>>>>>> uri's scheme determines the config property (fs.SCHEME.impl)
>>>>>>>> naming
>>>>>>>> the FileSystem implementation class. The uri's authority is used
>>>>>>>> to
>>>>>>>> determine the host, port, etc. for a filesystem.</description>
>>>>>>>> </property>
>>>>>>>> </configuration>
>>>>>>>>
>>>>>>>> On the master machine I just swap cloud15 for cloud6.
>>>>>>>> In the file /etc/hosts I have the lines (192.168.188.15 cloud15) and
>>>>>>>> (192.168.188.6 cloud6), and both machines can reach each other through
>>>>>>>> ssh without a password.
>>>>>>>>
>>>>>>>> Am I missing anything?
>>>>>>>>
>>>>>>>> Thanks in advance!
>>>>>>>> Felipe
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> *--
>>>>>>>> -- Felipe Oliveira Gutierrez
>>>>>>>> -- Felipe.o.Gutierrez@gmail.com
>>>>>>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *--
>>>>>> -- Felipe Oliveira Gutierrez
>>>>>> -- Felipe.o.Gutierrez@gmail.com
>>>>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> *--
>>> -- Felipe Oliveira Gutierrez
>>> -- Felipe.o.Gutierrez@gmail.com
>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>
>>
>>
>
--
*--
-- Felipe Oliveira Gutierrez
-- Felipe.o.Gutierrez@gmail.com
-- https://sites.google.com/site/lipe82/Home/diaadia*
Re: Datanode doesn't connect to Namenode
Posted by Shekhar Sharma <sh...@gmail.com>.
Keep the configuration the same on the datanodes as well for the time
being. The only thing the datanode (slave) machine needs to know is the
masters file (that is, who the master is).
You also need to tell the slave machine where your namenode is running,
which you specify in the property fs.default.name, and where your job
tracker is running, which you specify in the property mapred.job.tracker.
Hopefully you can bring up your cluster now.
If you still face issues, you can follow my blog:
http://ksssblogs.blogspot.in/2013/07/multi-node-hadoop-cluster-set-using-vms.html
Regards,
Som Shekhar Sharma
+91-8197243810
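Concretely, the two properties named above would look like this on each slave (a sketch using the master hostname and ports from this thread; fs.default.name goes in core-site.xml and mapred.job.tracker in mapred-site.xml):

```xml
<!-- core-site.xml on every node: point at the master's namenode -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://cloud6:54310</value>
</property>

<!-- mapred-site.xml on every node: point at the master's jobtracker -->
<property>
  <name>mapred.job.tracker</name>
  <value>cloud6:54311</value>
</property>
```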
On Thu, Aug 8, 2013 at 5:21 PM, Shekhar Sharma <sh...@gmail.com>wrote:
> If you have removed this property from the slave machines, then your DN
> information will be created under the /tmp folder, and once you reboot your
> datanode machines, that information will be lost.
>
> Sorry, I had not seen the logs, but you don't have to play around with the
> properties. See, a datanode will not come up in a scenario where it is not
> able to send the heartbeat signal to the namenode at port 54310.
>
>
>
>
> Go step by step:
>
> Check whether you can ping every machine and whether you can SSH in a
> passwordless manner.
>
> Let's say I have one master machine whose hostname is *Master* and two
> slave machines, *Slave0* and *Slave1* (I am assuming the OS used is
> CentOS).
>
> On the *master machine* do the following things:
>
> *First disable the firewall:* as the root user run the following commands:
> service iptables save
> service iptables stop
> chkconfig iptables off
>
> Then specify the following properties in the corresponding files.
>
> *mapred-site.xml*
>
> - mapred.job.tracker (Master:54311)
>
> *core-site.xml*
>
> - fs.default.name (hdfs://Master:54310)
> - hadoop.tmp.dir (choose some persistent directory)
>
> *hdfs-site.xml*
>
> - dfs.replication (3)
> - dfs.block.size (64 MB)
>
> *Masters file*
>
> - Master
>
> *Slaves file*
>
> - Slave0
> - Slave1
>
> *hadoop-env.sh*
>
> - export JAVA_HOME=<Your java home directory>
>
>
> On the *slave0 machine*:
>
> - Disable the firewall
> - Set the same properties as you did on the master machine
>
> On the *slave1 machine*:
>
> - Disable the firewall
> - Set the same properties as you did on the master machine
>
>
> Once you start the cluster by running the command start-all.sh, check that
> ports 54310 and 54311 have been opened by running the command "netstat
> -tuplen"; it will show whether the ports are open or not.
>
> Regards,
> Som Shekhar Sharma
> +91-8197243810
>
>
> On Thu, Aug 8, 2013 at 4:57 PM, Felipe Gutierrez <
> felipe.o.gutierrez@gmail.com> wrote:
>
>> Thanks,
>> in all the files I changed it to the master (cloud6), and I took out this
>> property: <name>hadoop.tmp.dir</name>.
>>
>> Felipe
>>
>>
>> On Wed, Aug 7, 2013 at 3:20 PM, Shekhar Sharma <sh...@gmail.com>wrote:
>>
>>> Disable the firewall on data node and namenode machines..
>>> Regards,
>>> Som Shekhar Sharma
>>> +91-8197243810
>>>
>>>
>>> On Wed, Aug 7, 2013 at 11:33 PM, Jitendra Yadav <
>>> jeetuyadav200890@gmail.com> wrote:
>>>
>>>> Your HDFS name entry should be the same on the master and datanodes:
>>>>
>>>> * <name>fs.default.name</name>*
>>>> *<value>hdfs://cloud6:54310</value>*
>>>>
>>>> Thanks
>>>> On Wed, Aug 7, 2013 at 11:05 PM, Felipe Gutierrez <
>>>> felipe.o.gutierrez@gmail.com> wrote:
>>>>
>>>>> on my slave the process is running:
>>>>> hduser@cloud15:/usr/local/hadoop$ jps
>>>>> 19025 DataNode
>>>>> 19092 Jps
>>>>>
>>>>>
>>>>> On Wed, Aug 7, 2013 at 2:26 PM, Jitendra Yadav <
>>>>> jeetuyadav200890@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Your logs show that the process is making an IPC call not to the
>>>>>> namenode; it is hitting the datanode itself.
>>>>>>
>>>>>> Could you please check your datanode process status?
>>>>>>
>>>>>> Regards
>>>>>> Jitendra
>>>>>>
>>>>>> On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
>>>>>> felipe.o.gutierrez@gmail.com> wrote:
>>>>>>
>>>>>>> Hi everyone,
>>>>>>>
>>>>>>> On my slave machine (cloud15) the datanode shows this log. It doesn't
>>>>>>> connect to the master (cloud6).
>>>>>>>
>>>>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client:
>>>>>>> Retrying connect to server: cloud15/192.168.188.15:54310. Already
>>>>>>> tried 9 time(s); retry policy is
>>>>>>> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
>>>>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at
>>>>>>> cloud15/192.168.188.15:54310 not available yet, Zzzzz...
>>>>>>>
>>>>>>> But when I type the jps command on the slave machine, DataNode is
>>>>>>> running. This is my core-site.xml file on the slave machine (cloud15):
>>>>>>> <configuration>
>>>>>>> <property>
>>>>>>> <name>hadoop.tmp.dir</name>
>>>>>>> <value>/app/hadoop/tmp</value>
>>>>>>> <description>A base for other temporary directories.</description>
>>>>>>> </property>
>>>>>>>
>>>>>>> <property>
>>>>>>> <name>fs.default.name</name>
>>>>>>> <value>hdfs://cloud15:54310</value>
>>>>>>> <description>The name of the default file system. A URI whose
>>>>>>> scheme and authority determine the FileSystem implementation. The
>>>>>>> uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>>>>> the FileSystem implementation class. The uri's authority is used
>>>>>>> to
>>>>>>> determine the host, port, etc. for a filesystem.</description>
>>>>>>> </property>
>>>>>>> </configuration>
>>>>>>>
>>>>>>> On the master machine I just swap cloud15 for cloud6.
>>>>>>> In the file /etc/hosts I have the lines (192.168.188.15 cloud15) and
>>>>>>> (192.168.188.6 cloud6), and both machines can access each other through
>>>>>>> ssh without a password.
>>>>>>>
>>>>>>> Am I missing anything?
>>>>>>>
>>>>>>> Thanks in advance!
>>>>>>> Felipe
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> *--
>>>>>>> -- Felipe Oliveira Gutierrez
>>>>>>> -- Felipe.o.Gutierrez@gmail.com
>>>>>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *--
>>>>> -- Felipe Oliveira Gutierrez
>>>>> -- Felipe.o.Gutierrez@gmail.com
>>>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> *--
>> -- Felipe Oliveira Gutierrez
>> -- Felipe.o.Gutierrez@gmail.com
>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>
>
>
Re: Datanode doesn't connect to Namenode
Posted by Shekhar Sharma <sh...@gmail.com>.
Keep the configuration the same on the datanodes as well for the time
being. The only thing a datanode (slave) machine needs to know is the
Masters file (that is, who the master is). You also need to tell the slave
machine where your namenode is running, which you specify in the property
fs.default.name, and where your job tracker is running, which you specify
in the property mapred.job.tracker.
Hope you are able to bring up your cluster now.
If you still face issues, you can follow my blog:
http://ksssblogs.blogspot.in/2013/07/multi-node-hadoop-cluster-set-using-vms.html
Regards,
Som Shekhar Sharma
+91-8197243810
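
To make the fix concrete: following the advice in this thread, a minimal
core-site.xml (assuming cloud6 is the master, as suggested above) could look
like this, identical on the master and every slave:

```xml
<!-- core-site.xml: a minimal sketch, identical on master (cloud6)
     and slave (cloud15). -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <!-- A persistent directory, not /tmp, so data survives reboots. -->
    <value>/app/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <!-- Must point at the NAMENODE host on every node, master and slaves. -->
    <value>hdfs://cloud6:54310</value>
  </property>
</configuration>
```

In mapred-site.xml, mapred.job.tracker would likewise point at cloud6:54311
on every node.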
Re: Datanode doesn't connect to Namenode
Posted by Shekhar Sharma <sh...@gmail.com>.
If you have removed this property from the slave machines, the DataNode
metadata will be created under the /tmp folder, and once you reboot your
datanode machines that information will be lost.
Sorry, I had not seen the logs, but you don't have to play around with the
properties. The datanode will not come up in any scenario where it is unable
to send its heartbeat signal to the namenode at port 54310.
Do this step by step:
Check whether you can ping every machine and SSH between them in a
passwordless manner.
Let's say I have one master machine whose hostname is *Master* and two
slave machines, *Slave0* and *Slave1* (I am assuming the OS used is
CentOS).
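
Since every machine must resolve the others' hostnames, each node's
/etc/hosts would typically carry entries like the following (the IP
addresses here are hypothetical; use your own):

```
# /etc/hosts, identical on every machine in the cluster
192.168.1.10   Master
192.168.1.11   Slave0
192.168.1.12   Slave1
```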
In *master Machine* do the following things:
*First disable the firewall:* as a root user, run the following commands:
service iptables save
service iptables stop
chkconfig iptables off
specify the following properties in the corresponding files
*mapred-site.xml*
- mapred.job.tracker (Master:54311)
*core-site.xml*
- fs.default.name (hdfs://Master:54310)
- hadoop.tmp.dir (choose some persistent directory)
*hdfs-site.xml*
- dfs.replication (3)
- dfs.block.size(64MB)
*Masters file*
- Master
*Slaves file*
- Slave0
- Slave1
*hadoop-env.sh*
- export JAVA_HOME=<Your java home directory>
In *slave0 machine*
- Disable the firewall
- Same properties as you did in Masters machine
In *slave 1 machine*
- Disable the firewall
- Same properties as you did in Master machine
Once you start the cluster by running start-all.sh, check that ports 54310
and 54311 have been opened by running the command "netstat -tuplen"; it
will show whether the ports are open or not.
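
As a portable alternative sketch of that check, the two ports can be probed
from any machine; the hostname cloud6 and ports 54310/54311 below are taken
from this thread, so adjust them for your cluster:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, DNS failures
        return False

if __name__ == "__main__":
    # 54310 = namenode RPC, 54311 = jobtracker, per this thread's config.
    for port in (54310, 54311):
        status = "open" if port_open("cloud6", port) else "NOT reachable"
        print(f"cloud6:{port} is {status}")
```

If a port shows as not reachable even though the daemon is running, the
firewall or a wrong fs.default.name host is the usual suspect.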
Regards,
Som Shekhar Sharma
+91-8197243810
Re: Datanode doesn't connect to Namenode
Posted by Shekhar Sharma <sh...@gmail.com>.
if you have removed this property from the slave machines then your DN
information will be created under /tmp folder and once you reboot your data
node machines, the information will be lost..
Sorry i have not seen the logs..but you dont have play around the
properties..
...see datanode will not come up in scenario, where it is not able to send
the heart beat signal to the name node at port 54310
Do step by step :
Check whether you can ping every machine and you can do SSH in password
less manner
Lets say i have one master machine whose hostname is *Master* and i have
two slave machines *Slave0* and *Slave1* ( i am assuming the OS used are
CentOS)
In *master Machine* do the following things:
*First disable the firewall by running the following command:
*as a root user run the following commands
service iptables save
service iptables stop
chkconfig iptables off
specify the following properties in the corresponding files
*mapred-site.xml*
- mapred.job.tracker (Master:54311)
*core-site.xml*
- fs.default.name (hdfs://Master:54310)
- hadoop.tmp.dir (choose some persistent directory)
*hdfs-site.xml*
- dfs.replication (3)
- dfs.block.size(64MB)
*Masters file*
- Master
*Slaves file*
- Slave0
- Slave1
*hadoop-env.sh*
- export JAVA_HOME=<Your java home directory>
In *slave0 machine
*
- Disable the firewall
- Same properties as you did in Masters machine
In *slave 1 machine*
- Disable the firewall
- Same properties as you did in Master machine
Once you start the cluster by running the command start-all.sh, check the
ports 54310 and 54311 got opened by running the command "netstat
-tuplen"..it will show whether ports are opened or not
Regards,
Som Shekhar Sharma
+91-8197243810
On Thu, Aug 8, 2013 at 4:57 PM, Felipe Gutierrez <
felipe.o.gutierrez@gmail.com> wrote:
> Thanks,
> at all files I changed to master (cloud6) and I take off this property
> <name>hadoop.tmp.dir</name>.
>
> Felipe
>
>
Re: Datanode doesn't connect to Namenode
Posted by Shekhar Sharma <sh...@gmail.com>.
If you have removed that property from the slave machines, then your DN
information will be created under the /tmp folder, and once you reboot your
datanode machines the information will be lost.
Sorry, I had not seen the logs, but you don't have to play around with the
properties.
A datanode will not come up when it is not able to send its heartbeat
signal to the namenode at port 54310.
Go step by step:
Check whether you can ping every machine and whether you can SSH in a
passwordless manner.
Let's say I have one master machine whose hostname is *Master* and two
slave machines, *Slave0* and *Slave1* (I am assuming the OS is CentOS).
In the *master machine* do the following things:
*First disable the firewall* by running the following commands as the
root user:
service iptables save
service iptables stop
chkconfig iptables off
Then specify the following properties in the corresponding files:
*mapred-site.xml*
- mapred.job.tracker (Master:54311)
*core-site.xml*
- fs.default.name (hdfs://Master:54310)
- hadoop.tmp.dir (choose some persistent directory)
*hdfs-site.xml*
- dfs.replication (3)
- dfs.block.size(64MB)
*Masters file*
- Master
*Slaves file*
- Slave0
- Slave1
*hadoop-env.sh*
- export JAVA_HOME=<Your java home directory>
In the *Slave0* machine:
- Disable the firewall
- Set the same properties as you did in the Master machine
In the *Slave1* machine:
- Disable the firewall
- Set the same properties as you did in the Master machine
Once you start the cluster by running the command start-all.sh, check that
ports 54310 and 54311 got opened by running the command "netstat -tuplen";
it will show whether the ports are open or not.
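The port check in the last step can also be scripted from a slave. A minimal
sketch (the hostname cloud6 and ports 54310/54311 come from this thread; the
helper name `port_open` is ours, not part of Hadoop):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the host and attempts a TCP connect;
        # any failure (refused, timeout, unresolvable) raises OSError.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On a healthy cluster, the namenode RPC port and the jobtracker port
# should both be reachable from every slave, e.g.:
#   port_open("cloud6", 54310)   # namenode
#   port_open("cloud6", 54311)   # jobtracker
```

If this returns False while the daemon is running, a firewall or a wrong
/etc/hosts entry is the usual culprit.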
Regards,
Som Shekhar Sharma
+91-8197243810
On Thu, Aug 8, 2013 at 4:57 PM, Felipe Gutierrez <
felipe.o.gutierrez@gmail.com> wrote:
> Thanks,
> at all files I changed to master (cloud6) and I take off this property
> <name>hadoop.tmp.dir</name>.
>
> Felipe
>
>
> On Wed, Aug 7, 2013 at 3:20 PM, Shekhar Sharma <sh...@gmail.com>wrote:
>
>> Disable the firewall on data node and namenode machines..
>> Regards,
>> Som Shekhar Sharma
>> +91-8197243810
>>
>>
>> On Wed, Aug 7, 2013 at 11:33 PM, Jitendra Yadav <
>> jeetuyadav200890@gmail.com> wrote:
>>
>>> Your hdfs name entry should be the same on the master and the datanodes:
>>>
>>> * <name>fs.default.name</name>*
>>> *<value>hdfs://cloud6:54310</value>*
>>>
>>> Thanks
>>> On Wed, Aug 7, 2013 at 11:05 PM, Felipe Gutierrez <
>>> felipe.o.gutierrez@gmail.com> wrote:
>>>
>>>> on my slave the process is running:
>>>> hduser@cloud15:/usr/local/hadoop$ jps
>>>> 19025 DataNode
>>>> 19092 Jps
>>>>
>>>>
>>>> On Wed, Aug 7, 2013 at 2:26 PM, Jitendra Yadav <
>>>> jeetuyadav200890@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Your logs show that the process is making its IPC call not to the
>>>>> namenode; it is hitting the datanode itself.
>>>>>
>>>>> Can you please check your datanode process status?
>>>>>
>>>>> Regards
>>>>> Jitendra
>>>>>
>>>>> On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
>>>>> felipe.o.gutierrez@gmail.com> wrote:
>>>>>
>>>>>> Hi everyone,
>>>>>>
>>>>>> On my slave machine (cloud15) the datanode shows this log. It doesn't
>>>>>> connect to the master (cloud6).
>>>>>>
>>>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying
>>>>>> connect to server: cloud15/192.168.188.15:54310. Already tried 9
>>>>>> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
>>>>>> sleepTime=1 SECONDS)
>>>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at
>>>>>> cloud15/192.168.188.15:54310 not available yet, Zzzzz...
>>>>>>
>>>>>> But when I type jps command on slave machine DataNode is running.
>>>>>> This is my file core-site.xml in slave machine (cloud15):
>>>>>> <configuration>
>>>>>> <property>
>>>>>> <name>hadoop.tmp.dir</name>
>>>>>> <value>/app/hadoop/tmp</value>
>>>>>> <description>A base for other temporary directories.</description>
>>>>>> </property>
>>>>>>
>>>>>> <property>
>>>>>> <name>fs.default.name</name>
>>>>>> <value>hdfs://cloud15:54310</value>
>>>>>> <description>The name of the default file system. A URI whose
>>>>>> scheme and authority determine the FileSystem implementation. The
>>>>>> uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>>>> the FileSystem implementation class. The uri's authority is used to
>>>>>> determine the host, port, etc. for a filesystem.</description>
>>>>>> </property>
>>>>>> </configuration>
>>>>>>
>>>>>> On the master machine I just swap cloud15 for cloud6.
>>>>>> In the file /etc/hosts I have the lines (192.168.188.15 cloud15) and
>>>>>> (192.168.188.6 cloud6), and both machines can access each other
>>>>>> through ssh without a password.
>>>>>>
>>>>>> Am I missing anything?
>>>>>>
>>>>>> Thanks in advance!
>>>>>> Felipe
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *--
>>>>>> -- Felipe Oliveira Gutierrez
>>>>>> -- Felipe.o.Gutierrez@gmail.com
>>>>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> *--
>>>> -- Felipe Oliveira Gutierrez
>>>> -- Felipe.o.Gutierrez@gmail.com
>>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>>
>>>
>>>
>>
>
>
> --
> *--
> -- Felipe Oliveira Gutierrez
> -- Felipe.o.Gutierrez@gmail.com
> -- https://sites.google.com/site/lipe82/Home/diaadia*
>
Re: Datanode doesn't connect to Namenode
Posted by Felipe Gutierrez <fe...@gmail.com>.
Thanks,
in all the files I changed it to the master (cloud6), and I took out the
property <name>hadoop.tmp.dir</name>.
Felipe
On Wed, Aug 7, 2013 at 3:20 PM, Shekhar Sharma <sh...@gmail.com>wrote:
> Disable the firewall on data node and namenode machines..
> Regards,
> Som Shekhar Sharma
> +91-8197243810
>
>
> On Wed, Aug 7, 2013 at 11:33 PM, Jitendra Yadav <
> jeetuyadav200890@gmail.com> wrote:
>
>> Your hdfs name entry should be the same on the master and the datanodes:
>>
>> * <name>fs.default.name</name>*
>> *<value>hdfs://cloud6:54310</value>*
>>
>> Thanks
>> On Wed, Aug 7, 2013 at 11:05 PM, Felipe Gutierrez <
>> felipe.o.gutierrez@gmail.com> wrote:
>>
>>> on my slave the process is running:
>>> hduser@cloud15:/usr/local/hadoop$ jps
>>> 19025 DataNode
>>> 19092 Jps
>>>
>>>
>>> On Wed, Aug 7, 2013 at 2:26 PM, Jitendra Yadav <
>>> jeetuyadav200890@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Your logs show that the process is making its IPC call not to the
>>>> namenode; it is hitting the datanode itself.
>>>>
>>>> Can you please check your datanode process status?
>>>>
>>>> Regards
>>>> Jitendra
>>>>
>>>> On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
>>>> felipe.o.gutierrez@gmail.com> wrote:
>>>>
>>>>> Hi everyone,
>>>>>
>>>>> On my slave machine (cloud15) the datanode shows this log. It doesn't
>>>>> connect to the master (cloud6).
>>>>>
>>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying
>>>>> connect to server: cloud15/192.168.188.15:54310. Already tried 9
>>>>> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
>>>>> sleepTime=1 SECONDS)
>>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at
>>>>> cloud15/192.168.188.15:54310 not available yet, Zzzzz...
>>>>>
>>>>> But when I type jps command on slave machine DataNode is running. This
>>>>> is my file core-site.xml in slave machine (cloud15):
>>>>> <configuration>
>>>>> <property>
>>>>> <name>hadoop.tmp.dir</name>
>>>>> <value>/app/hadoop/tmp</value>
>>>>> <description>A base for other temporary directories.</description>
>>>>> </property>
>>>>>
>>>>> <property>
>>>>> <name>fs.default.name</name>
>>>>> <value>hdfs://cloud15:54310</value>
>>>>> <description>The name of the default file system. A URI whose
>>>>> scheme and authority determine the FileSystem implementation. The
>>>>> uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>>> the FileSystem implementation class. The uri's authority is used to
>>>>> determine the host, port, etc. for a filesystem.</description>
>>>>> </property>
>>>>> </configuration>
>>>>>
>>>>> On the master machine I just swap cloud15 for cloud6.
>>>>> In the file /etc/hosts I have the lines (192.168.188.15 cloud15) and
>>>>> (192.168.188.6 cloud6), and both machines can access each other
>>>>> through ssh without a password.
>>>>>
>>>>> Am I missing anything?
>>>>>
>>>>> Thanks in advance!
>>>>> Felipe
>>>>>
>>>>>
>>>>> --
>>>>> *--
>>>>> -- Felipe Oliveira Gutierrez
>>>>> -- Felipe.o.Gutierrez@gmail.com
>>>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> *--
>>> -- Felipe Oliveira Gutierrez
>>> -- Felipe.o.Gutierrez@gmail.com
>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>
>>
>>
>
--
*--
-- Felipe Oliveira Gutierrez
-- Felipe.o.Gutierrez@gmail.com
-- https://sites.google.com/site/lipe82/Home/diaadia*
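Putting the advice in this thread together — point fs.default.name at the
master on every node, and keep hadoop.tmp.dir on a persistent directory
rather than removing it — the slave's core-site.xml would look roughly like
this (hostname and path taken from the thread; adjust for your cluster):

<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://cloud6:54310</value>
<description>The default file system; must point at the namenode
(cloud6) on the master and on every slave.</description>
</property>
</configuration>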
Re: Datanode doesn't connect to Namenode
Posted by Shekhar Sharma <sh...@gmail.com>.
Disable the firewall on the datanode and namenode machines.
Regards,
Som Shekhar Sharma
+91-8197243810
On Wed, Aug 7, 2013 at 11:33 PM, Jitendra Yadav
<je...@gmail.com>wrote:
> Your hdfs name entry should be the same on the master and the datanodes:
>
> * <name>fs.default.name</name>*
> *<value>hdfs://cloud6:54310</value>*
>
> Thanks
> On Wed, Aug 7, 2013 at 11:05 PM, Felipe Gutierrez <
> felipe.o.gutierrez@gmail.com> wrote:
>
>> on my slave the process is running:
>> hduser@cloud15:/usr/local/hadoop$ jps
>> 19025 DataNode
>> 19092 Jps
>>
>>
>> On Wed, Aug 7, 2013 at 2:26 PM, Jitendra Yadav <
>> jeetuyadav200890@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Your logs show that the process is making its IPC call not to the
>>> namenode; it is hitting the datanode itself.
>>>
>>> Can you please check your datanode process status?
>>>
>>> Regards
>>> Jitendra
>>>
>>> On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
>>> felipe.o.gutierrez@gmail.com> wrote:
>>>
>>>> Hi everyone,
>>>>
>>>> On my slave machine (cloud15) the datanode shows this log. It doesn't
>>>> connect to the master (cloud6).
>>>>
>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: cloud15/192.168.188.15:54310. Already tried 9
>>>> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
>>>> sleepTime=1 SECONDS)
>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at
>>>> cloud15/192.168.188.15:54310 not available yet, Zzzzz...
>>>>
>>>> But when I type jps command on slave machine DataNode is running. This
>>>> is my file core-site.xml in slave machine (cloud15):
>>>> <configuration>
>>>> <property>
>>>> <name>hadoop.tmp.dir</name>
>>>> <value>/app/hadoop/tmp</value>
>>>> <description>A base for other temporary directories.</description>
>>>> </property>
>>>>
>>>> <property>
>>>> <name>fs.default.name</name>
>>>> <value>hdfs://cloud15:54310</value>
>>>> <description>The name of the default file system. A URI whose
>>>> scheme and authority determine the FileSystem implementation. The
>>>> uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>> the FileSystem implementation class. The uri's authority is used to
>>>> determine the host, port, etc. for a filesystem.</description>
>>>> </property>
>>>> </configuration>
>>>>
>>>> On the master machine I just swap cloud15 for cloud6.
>>>> In the file /etc/hosts I have the lines (192.168.188.15 cloud15) and
>>>> (192.168.188.6 cloud6), and both machines can access each other
>>>> through ssh without a password.
>>>>
>>>> Am I missing anything?
>>>>
>>>> Thanks in advance!
>>>> Felipe
>>>>
>>>>
>>>> --
>>>> *--
>>>> -- Felipe Oliveira Gutierrez
>>>> -- Felipe.o.Gutierrez@gmail.com
>>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>>
>>>
>>>
>>
>>
>> --
>> *--
>> -- Felipe Oliveira Gutierrez
>> -- Felipe.o.Gutierrez@gmail.com
>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>
>
>
Re: Datanode doesn't connect to Namenode
Posted by Shekhar Sharma <sh...@gmail.com>.
Disable the firewall on data node and namenode machines..
Regards,
Som Shekhar Sharma
+91-8197243810
On Wed, Aug 7, 2013 at 11:33 PM, Jitendra Yadav
<je...@gmail.com>wrote:
> Your hdfs name entry should be same on master and databnodes
>
> * <name>fs.default.name</name>*
> *<value>hdfs://cloud6:54310</value>*
>
> Thanks
> On Wed, Aug 7, 2013 at 11:05 PM, Felipe Gutierrez <
> felipe.o.gutierrez@gmail.com> wrote:
>
>> on my slave the process is running:
>> hduser@cloud15:/usr/local/hadoop$ jps
>> 19025 DataNode
>> 19092 Jps
>>
>>
>> On Wed, Aug 7, 2013 at 2:26 PM, Jitendra Yadav <
>> jeetuyadav200890@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Your logs showing that the process is creating IPC call not for
>>> namenode, it is hitting datanode itself.
>>>
>>> Check you please check you datanode processes status?.
>>>
>>> Regards
>>> Jitendra
>>>
>>> On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
>>> felipe.o.gutierrez@gmail.com> wrote:
>>>
>>>> Hi everyone,
>>>>
>>>> My slave machine (cloud15) the datanode shows this log. It doesn't
>>>> connect to the master (cloud6).
>>>>
>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: cloud15/192.168.188.15:54310. Already tried 9
>>>> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
>>>> sleepTime=1 SECONDS)
>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at
>>>> cloud15/192.168.188.15:54310 not available yet, Zzzzz...
>>>>
>>>> But when I type jps command on slave machine DataNode is running. This
>>>> is my file core-site.xml in slave machine (cloud15):
>>>> <configuration>
>>>> <property>
>>>> <name>hadoop.tmp.dir</name>
>>>> <value>/app/hadoop/tmp</value>
>>>> <description>A base for other temporary directories.</description>
>>>> </property>
>>>>
>>>> <property>
>>>> <name>fs.default.name</name>
>>>> <value>hdfs://cloud15:54310</value>
>>>> <description>The name of the default file system. A URI whose
>>>> scheme and authority determine the FileSystem implementation. The
>>>> uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>> the FileSystem implementation class. The uri's authority is used to
>>>> determine the host, port, etc. for a filesystem.</description>
>>>> </property>
>>>> </configuration>
>>>>
>>>> In the master machine I just swap cloud15 to cloud6.
>>>> In the file /etc/host I have (192.168.188.15 cloud15) and
>>>> (192.168.188.6 cloud6) lines, and both machines access through ssh with
>>>> out password.
>>>>
>>>> Am I missing anything?
>>>>
>>>> Thanks in advance!
>>>> Felipe
>>>>
>>>>
>>>> --
>>>> *--
>>>> -- Felipe Oliveira Gutierrez
>>>> -- Felipe.o.Gutierrez@gmail.com
>>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>>
>>>
>>>
>>
>>
>> --
>> *--
>> -- Felipe Oliveira Gutierrez
>> -- Felipe.o.Gutierrez@gmail.com
>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>
>
>
Re: Datanode doesn't connect to Namenode
Posted by Shekhar Sharma <sh...@gmail.com>.
Disable the firewall on data node and namenode machines..
Regards,
Som Shekhar Sharma
+91-8197243810
On Wed, Aug 7, 2013 at 11:33 PM, Jitendra Yadav
<je...@gmail.com>wrote:
> Your hdfs name entry should be the same on the master and the datanodes:
>
> * <name>fs.default.name</name>*
> *<value>hdfs://cloud6:54310</value>*
>
> Thanks
> On Wed, Aug 7, 2013 at 11:05 PM, Felipe Gutierrez <
> felipe.o.gutierrez@gmail.com> wrote:
>
>> on my slave the process is running:
>> hduser@cloud15:/usr/local/hadoop$ jps
>> 19025 DataNode
>> 19092 Jps
>>
>>
>> On Wed, Aug 7, 2013 at 2:26 PM, Jitendra Yadav <
>> jeetuyadav200890@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Your logs show that the process is making an IPC call not to the
>>> namenode; it is hitting the datanode itself.
>>>
>>> Could you please check your datanode process status?
>>>
>>> Regards
>>> Jitendra
>>>
>>> On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
>>> felipe.o.gutierrez@gmail.com> wrote:
>>>
>>>> Hi everyone,
>>>>
>>>> On my slave machine (cloud15) the datanode shows this log. It doesn't
>>>> connect to the master (cloud6).
>>>>
>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying
>>>> connect to server: cloud15/192.168.188.15:54310. Already tried 9
>>>> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
>>>> sleepTime=1 SECONDS)
>>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at
>>>> cloud15/192.168.188.15:54310 not available yet, Zzzzz...
>>>>
>>>> But when I type the jps command on the slave machine, the DataNode is
>>>> running. This is my core-site.xml file on the slave machine (cloud15):
>>>> <configuration>
>>>> <property>
>>>> <name>hadoop.tmp.dir</name>
>>>> <value>/app/hadoop/tmp</value>
>>>> <description>A base for other temporary directories.</description>
>>>> </property>
>>>>
>>>> <property>
>>>> <name>fs.default.name</name>
>>>> <value>hdfs://cloud15:54310</value>
>>>> <description>The name of the default file system. A URI whose
>>>> scheme and authority determine the FileSystem implementation. The
>>>> uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>> the FileSystem implementation class. The uri's authority is used to
>>>> determine the host, port, etc. for a filesystem.</description>
>>>> </property>
>>>> </configuration>
>>>>
>>>> On the master machine I just swapped cloud15 for cloud6.
>>>> In the file /etc/hosts I have the lines (192.168.188.15 cloud15) and
>>>> (192.168.188.6 cloud6), and both machines can reach each other through
>>>> ssh without a password.
>>>>
>>>> Am I missing anything?
>>>>
>>>> Thanks in advance!
>>>> Felipe
>>>>
>>>>
>>>> --
>>>> *--
>>>> -- Felipe Oliveira Gutierrez
>>>> -- Felipe.o.Gutierrez@gmail.com
>>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>>
>>>
>>>
>>
>>
>> --
>> *--
>> -- Felipe Oliveira Gutierrez
>> -- Felipe.o.Gutierrez@gmail.com
>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>
>
>
Re: Datanode doesn't connect to Namenode
Posted by Jitendra Yadav <je...@gmail.com>.
Your hdfs name entry should be the same on the master and the datanodes:
* <name>fs.default.name</name>*
*<value>hdfs://cloud6:54310</value>*
Thanks
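In other words, the datanode's core-site.xml should point at the master, not at itself. A corrected fragment for cloud15 (hostname and port taken from this thread) would look like:

```xml
<property>
<name>fs.default.name</name>
<value>hdfs://cloud6:54310</value>
<description>All nodes, including datanodes, must point at the
namenode running on the master host.</description>
</property>
```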
On Wed, Aug 7, 2013 at 11:05 PM, Felipe Gutierrez <
felipe.o.gutierrez@gmail.com> wrote:
> on my slave the process is running:
> hduser@cloud15:/usr/local/hadoop$ jps
> 19025 DataNode
> 19092 Jps
>
>
> On Wed, Aug 7, 2013 at 2:26 PM, Jitendra Yadav <jeetuyadav200890@gmail.com
> > wrote:
>
>> Hi,
>>
>> Your logs show that the process is making an IPC call not to the
>> namenode; it is hitting the datanode itself.
>>
>> Could you please check your datanode process status?
>>
>> Regards
>> Jitendra
>>
>> On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
>> felipe.o.gutierrez@gmail.com> wrote:
>>
>>> Hi everyone,
>>>
>>> On my slave machine (cloud15) the datanode shows this log. It doesn't
>>> connect to the master (cloud6).
>>>
>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: cloud15/192.168.188.15:54310. Already tried 9
>>> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
>>> sleepTime=1 SECONDS)
>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at
>>> cloud15/192.168.188.15:54310 not available yet, Zzzzz...
>>>
>>> But when I type the jps command on the slave machine, the DataNode is
>>> running. This is my core-site.xml file on the slave machine (cloud15):
>>> <configuration>
>>> <property>
>>> <name>hadoop.tmp.dir</name>
>>> <value>/app/hadoop/tmp</value>
>>> <description>A base for other temporary directories.</description>
>>> </property>
>>>
>>> <property>
>>> <name>fs.default.name</name>
>>> <value>hdfs://cloud15:54310</value>
>>> <description>The name of the default file system. A URI whose
>>> scheme and authority determine the FileSystem implementation. The
>>> uri's scheme determines the config property (fs.SCHEME.impl) naming
>>> the FileSystem implementation class. The uri's authority is used to
>>> determine the host, port, etc. for a filesystem.</description>
>>> </property>
>>> </configuration>
>>>
>>> On the master machine I just swapped cloud15 for cloud6.
>>> In the file /etc/hosts I have the lines (192.168.188.15 cloud15) and
>>> (192.168.188.6 cloud6), and both machines can reach each other through
>>> ssh without a password.
>>>
>>> Am I missing anything?
>>>
>>> Thanks in advance!
>>> Felipe
>>>
>>>
>>> --
>>> *--
>>> -- Felipe Oliveira Gutierrez
>>> -- Felipe.o.Gutierrez@gmail.com
>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>
>>
>>
>
>
> --
> *--
> -- Felipe Oliveira Gutierrez
> -- Felipe.o.Gutierrez@gmail.com
> -- https://sites.google.com/site/lipe82/Home/diaadia*
>
Re: Datanode doesn't connect to Namenode
Posted by Felipe Gutierrez <fe...@gmail.com>.
On my slave the process is running:
hduser@cloud15:/usr/local/hadoop$ jps
19025 DataNode
19092 Jps
On Wed, Aug 7, 2013 at 2:26 PM, Jitendra Yadav
<je...@gmail.com>wrote:
> Hi,
>
> Your logs show that the process is making an IPC call not to the
> namenode; it is hitting the datanode itself.
>
> Could you please check your datanode process status?
>
> Regards
> Jitendra
>
> On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
> felipe.o.gutierrez@gmail.com> wrote:
>
>> Hi everyone,
>>
>> On my slave machine (cloud15) the datanode shows this log. It doesn't
>> connect to the master (cloud6).
>>
>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: cloud15/192.168.188.15:54310. Already tried 9
>> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
>> sleepTime=1 SECONDS)
>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at cloud15/
>> 192.168.188.15:54310 not available yet, Zzzzz...
>>
>> But when I type the jps command on the slave machine, the DataNode is
>> running. This is my core-site.xml file on the slave machine (cloud15):
>> <configuration>
>> <property>
>> <name>hadoop.tmp.dir</name>
>> <value>/app/hadoop/tmp</value>
>> <description>A base for other temporary directories.</description>
>> </property>
>>
>> <property>
>> <name>fs.default.name</name>
>> <value>hdfs://cloud15:54310</value>
>> <description>The name of the default file system. A URI whose
>> scheme and authority determine the FileSystem implementation. The
>> uri's scheme determines the config property (fs.SCHEME.impl) naming
>> the FileSystem implementation class. The uri's authority is used to
>> determine the host, port, etc. for a filesystem.</description>
>> </property>
>> </configuration>
>>
>> On the master machine I just swapped cloud15 for cloud6.
>> In the file /etc/hosts I have the lines (192.168.188.15 cloud15) and
>> (192.168.188.6 cloud6), and both machines can reach each other through
>> ssh without a password.
>>
>> Am I missing anything?
>>
>> Thanks in advance!
>> Felipe
>>
>>
>> --
>> *--
>> -- Felipe Oliveira Gutierrez
>> -- Felipe.o.Gutierrez@gmail.com
>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>
>
>
--
*--
-- Felipe Oliveira Gutierrez
-- Felipe.o.Gutierrez@gmail.com
-- https://sites.google.com/site/lipe82/Home/diaadia*
Re: Datanode doesn't connect to Namenode
Posted by Jitendra Yadav <je...@gmail.com>.
Hi,
Your logs show that the process is making an IPC call not to the namenode;
it is hitting the datanode itself.
Could you please check your datanode process status?
Regards
Jitendra
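Besides the process list, it is worth confirming on both machines that the hostnames resolve to the intended interfaces. A short sketch (hostnames from this thread; resolution is assumed to come from the /etc/hosts entries mentioned below):

```shell
#!/usr/bin/env bash
# Each hostname should resolve to its LAN address (192.168.188.x here),
# not to 127.0.0.1 or 127.0.1.1 -- a loopback mapping makes the daemon
# bind to an interface that remote nodes cannot reach.
for h in cloud6 cloud15; do
  if addr=$(getent hosts "$h"); then
    echo "$addr"
  else
    echo "$h does not resolve"
  fi
done
```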
On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
felipe.o.gutierrez@gmail.com> wrote:
> Hi everyone,
>
> On my slave machine (cloud15) the datanode shows this log. It doesn't
> connect to the master (cloud6).
>
> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: cloud15/192.168.188.15:54310. Already tried 9 time(s);
> retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
> sleepTime=1 SECONDS)
> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at cloud15/
> 192.168.188.15:54310 not available yet, Zzzzz...
>
> But when I type the jps command on the slave machine, the DataNode is
> running. This is my core-site.xml file on the slave machine (cloud15):
> <configuration>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/app/hadoop/tmp</value>
> <description>A base for other temporary directories.</description>
> </property>
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://cloud15:54310</value>
> <description>The name of the default file system. A URI whose
> scheme and authority determine the FileSystem implementation. The
> uri's scheme determines the config property (fs.SCHEME.impl) naming
> the FileSystem implementation class. The uri's authority is used to
> determine the host, port, etc. for a filesystem.</description>
> </property>
> </configuration>
>
> On the master machine I just swapped cloud15 for cloud6.
> In the file /etc/hosts I have the lines (192.168.188.15 cloud15) and
> (192.168.188.6 cloud6), and both machines can reach each other through
> ssh without a password.
>
> Am I missing anything?
>
> Thanks in advance!
> Felipe
>
>
> --
> *--
> -- Felipe Oliveira Gutierrez
> -- Felipe.o.Gutierrez@gmail.com
> -- https://sites.google.com/site/lipe82/Home/diaadia*
>
Re: Datanode doesn't connect to Namenode
Posted by Jitendra Yadav <je...@gmail.com>.
I'm not able to see the tasktracker process on your datanode.
On Wed, Aug 7, 2013 at 11:14 PM, Felipe Gutierrez <
felipe.o.gutierrez@gmail.com> wrote:
> yes, in slave I type:
> <name>fs.default.name</name>
> <value>hdfs://cloud15:54310</value>
>
> in master I type:
> <name>fs.default.name</name>
> <value>hdfs://cloud6:54310</value>
>
> If I put cloud6 in both configurations, the slave doesn't start.
>
>
>
>
> On Wed, Aug 7, 2013 at 2:40 PM, Sivaram RL <si...@gmail.com> wrote:
>
>> Hi,
>>
>> Your datanode configuration shows
>>
>> <name>fs.default.name</name>
>> <value>hdfs://cloud15:54310</value>
>>
>> But you said the Namenode is configured on the master (cloud6). Can you
>> check the configuration again?
>>
>>
>> Regards,
>> Sivaram R L
>>
>>
>> On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
>> felipe.o.gutierrez@gmail.com> wrote:
>>
>>> Hi everyone,
>>>
>>> My slave machine (cloud15) the datanode shows this log. It doesn't
>>> connect to the master (cloud6).
>>>
>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: cloud15/192.168.188.15:54310. Already tried 9
>>> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
>>> sleepTime=1 SECONDS)
>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at
>>> cloud15/192.168.188.15:54310 not available yet, Zzzzz...
>>>
>>> But when I type jps command on slave machine DataNode is running. This
>>> is my file core-site.xml in slave machine (cloud15):
>>> <configuration>
>>> <property>
>>> <name>hadoop.tmp.dir</name>
>>> <value>/app/hadoop/tmp</value>
>>> <description>A base for other temporary directories.</description>
>>> </property>
>>>
>>> <property>
>>> <name>fs.default.name</name>
>>> <value>hdfs://cloud15:54310</value>
>>> <description>The name of the default file system. A URI whose
>>> scheme and authority determine the FileSystem implementation. The
>>> uri's scheme determines the config property (fs.SCHEME.impl) naming
>>> the FileSystem implementation class. The uri's authority is used to
>>> determine the host, port, etc. for a filesystem.</description>
>>> </property>
>>> </configuration>
>>>
>>> In the master machine I just swap cloud15 to cloud6.
>>> In the file /etc/host I have (192.168.188.15 cloud15) and
>>> (192.168.188.6 cloud6) lines, and both machines access through ssh with
>>> out password.
>>>
>>> Am I missing anything?
>>>
>>> Thanks in advance!
>>> Felipe
>>>
>>>
>>> --
>>> *--
>>> -- Felipe Oliveira Gutierrez
>>> -- Felipe.o.Gutierrez@gmail.com
>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>
>>
>>
>
>
> --
> *--
> -- Felipe Oliveira Gutierrez
> -- Felipe.o.Gutierrez@gmail.com
> -- https://sites.google.com/site/lipe82/Home/diaadia*
>
Re: Datanode doesn't connect to Namenode
Posted by Jitendra Yadav <je...@gmail.com>.
I'm not able to see tasktraker process on your datanode.
On Wed, Aug 7, 2013 at 11:14 PM, Felipe Gutierrez <
felipe.o.gutierrez@gmail.com> wrote:
> yes, in slave I type:
> <name>fs.default.name</name>
> <value>hdfs://cloud15:54310</value>
>
> in master I type:
> <name>fs.default.name</name>
> <value>hdfs://cloud6:54310</value>
>
> If I type cloud6 on both configurations, the slave doesn't start.
>
>
>
>
> On Wed, Aug 7, 2013 at 2:40 PM, Sivaram RL <si...@gmail.com> wrote:
>
>> Hi ,
>>
>> your configuration of Datanode shows
>>
>> <name>fs.default.name</name>
>> <value>hdfs://cloud15:54310</value>
>>
>> But you have said Namenode is configured on master (cloud6). Can you
>> check the configuration again ?
>>
>>
>> Regards,
>> Sivaram R L
>>
>>
>> On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
>> felipe.o.gutierrez@gmail.com> wrote:
>>
>>> Hi everyone,
>>>
>>> My slave machine (cloud15) the datanode shows this log. It doesn't
>>> connect to the master (cloud6).
>>>
>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: cloud15/192.168.188.15:54310. Already tried 9
>>> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
>>> sleepTime=1 SECONDS)
>>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at
>>> cloud15/192.168.188.15:54310 not available yet, Zzzzz...
>>>
>>> But when I run the jps command on the slave machine, DataNode is running.
>>> This is my core-site.xml on the slave machine (cloud15):
>>> <configuration>
>>> <property>
>>> <name>hadoop.tmp.dir</name>
>>> <value>/app/hadoop/tmp</value>
>>> <description>A base for other temporary directories.</description>
>>> </property>
>>>
>>> <property>
>>> <name>fs.default.name</name>
>>> <value>hdfs://cloud15:54310</value>
>>> <description>The name of the default file system. A URI whose
>>> scheme and authority determine the FileSystem implementation. The
>>> uri's scheme determines the config property (fs.SCHEME.impl) naming
>>> the FileSystem implementation class. The uri's authority is used to
>>> determine the host, port, etc. for a filesystem.</description>
>>> </property>
>>> </configuration>
>>>
>>> On the master machine I just swap cloud15 for cloud6.
>>> In the file /etc/hosts I have the lines (192.168.188.15 cloud15) and
>>> (192.168.188.6 cloud6), and both machines can reach each other over ssh
>>> without a password.
>>>
>>> Am I missing anything?
>>>
>>> Thanks in advance!
>>> Felipe
>>>
>>>
>>> --
>>> *--
>>> -- Felipe Oliveira Gutierrez
>>> -- Felipe.o.Gutierrez@gmail.com
>>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>>
>>
>>
>
>
> --
> *--
> -- Felipe Oliveira Gutierrez
> -- Felipe.o.Gutierrez@gmail.com
> -- https://sites.google.com/site/lipe82/Home/diaadia*
>
Re: Datanode doesn't connect to Namenode
Posted by Felipe Gutierrez <fe...@gmail.com>.
Yes, on the slave I have:
<name>fs.default.name</name>
<value>hdfs://cloud15:54310</value>
On the master I have:
<name>fs.default.name</name>
<value>hdfs://cloud6:54310</value>
If I put cloud6 in both configurations, the slave doesn't start.
On Wed, Aug 7, 2013 at 2:40 PM, Sivaram RL <si...@gmail.com> wrote:
> Hi ,
>
> your datanode configuration shows
>
> <name>fs.default.name</name>
> <value>hdfs://cloud15:54310</value>
>
> But you said the Namenode is configured on the master (cloud6). Can you
> check the configuration again?
>
>
> Regards,
> Sivaram R L
>
>
> On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
> felipe.o.gutierrez@gmail.com> wrote:
>
>> Hi everyone,
>>
>> On my slave machine (cloud15) the datanode shows this log. It doesn't
>> connect to the master (cloud6).
>>
>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying
>> connect to server: cloud15/192.168.188.15:54310. Already tried 9
>> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
>> sleepTime=1 SECONDS)
>> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at cloud15/
>> 192.168.188.15:54310 not available yet, Zzzzz...
>>
>> But when I run the jps command on the slave machine, DataNode is running.
>> This is my core-site.xml on the slave machine (cloud15):
>> <configuration>
>> <property>
>> <name>hadoop.tmp.dir</name>
>> <value>/app/hadoop/tmp</value>
>> <description>A base for other temporary directories.</description>
>> </property>
>>
>> <property>
>> <name>fs.default.name</name>
>> <value>hdfs://cloud15:54310</value>
>> <description>The name of the default file system. A URI whose
>> scheme and authority determine the FileSystem implementation. The
>> uri's scheme determines the config property (fs.SCHEME.impl) naming
>> the FileSystem implementation class. The uri's authority is used to
>> determine the host, port, etc. for a filesystem.</description>
>> </property>
>> </configuration>
>>
>> On the master machine I just swap cloud15 for cloud6.
>> In the file /etc/hosts I have the lines (192.168.188.15 cloud15) and
>> (192.168.188.6 cloud6), and both machines can ssh to each other without a password.
>>
>> Am I missing anything?
>>
>> Thanks in advance!
>> Felipe
>>
>>
>> --
>> *--
>> -- Felipe Oliveira Gutierrez
>> -- Felipe.o.Gutierrez@gmail.com
>> -- https://sites.google.com/site/lipe82/Home/diaadia*
>>
>
>
--
*--
-- Felipe Oliveira Gutierrez
-- Felipe.o.Gutierrez@gmail.com
-- https://sites.google.com/site/lipe82/Home/diaadia*
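For reference, the configuration this exchange points toward: fs.default.name must name the namenode host on every machine, master and slaves alike; a datanode that points it at itself will retry its own address forever. A sketch using the hostname and port from this thread:

```xml
<!-- core-site.xml on BOTH cloud6 (master) and cloud15 (slave):
     fs.default.name always points at the namenode host -->
<property>
<name>fs.default.name</name>
<value>hdfs://cloud6:54310</value>
</property>
```

When that change alone makes the slave fail to start, a stale namespaceID under hadoop.tmp.dir or a blocked port is a common culprit, though the thread does not record which hint finally resolved it.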
Re: Datanode doesn't connect to Namenode
Posted by Sivaram RL <si...@gmail.com>.
Hi,
Your datanode configuration shows
<name>fs.default.name</name>
<value>hdfs://cloud15:54310</value>
But you said the Namenode is configured on the master (cloud6). Can you
check the configuration again?
Regards,
Sivaram R L
On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
felipe.o.gutierrez@gmail.com> wrote:
> Hi everyone,
>
> On my slave machine (cloud15) the datanode shows this log. It doesn't
> connect to the master (cloud6).
>
> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: cloud15/192.168.188.15:54310. Already tried 9 time(s);
> retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
> sleepTime=1 SECONDS)
> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at cloud15/
> 192.168.188.15:54310 not available yet, Zzzzz...
>
> But when I run the jps command on the slave machine, DataNode is running.
> This is my core-site.xml on the slave machine (cloud15):
> <configuration>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/app/hadoop/tmp</value>
> <description>A base for other temporary directories.</description>
> </property>
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://cloud15:54310</value>
> <description>The name of the default file system. A URI whose
> scheme and authority determine the FileSystem implementation. The
> uri's scheme determines the config property (fs.SCHEME.impl) naming
> the FileSystem implementation class. The uri's authority is used to
> determine the host, port, etc. for a filesystem.</description>
> </property>
> </configuration>
>
> On the master machine I just swap cloud15 for cloud6.
> In the file /etc/hosts I have the lines (192.168.188.15 cloud15) and
> (192.168.188.6 cloud6), and both machines can ssh to each other without a password.
>
> Am I missing anything?
>
> Thanks in advance!
> Felipe
>
>
> --
> *--
> -- Felipe Oliveira Gutierrez
> -- Felipe.o.Gutierrez@gmail.com
> -- https://sites.google.com/site/lipe82/Home/diaadia*
>
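Sivaram's point can be turned into a quick sanity check. The sketch below is hypothetical (the inlined XML, hostnames, and port are taken from this thread, not from any real cluster): it parses a core-site.xml and reports when fs.default.name names the local machine rather than the namenode.

```python
# Hedged sketch: detect the misconfiguration discussed in this thread, where
# the slave's fs.default.name points at the slave itself (cloud15) instead of
# the namenode (cloud6). On a real node you would read the actual
# core-site.xml and use socket.gethostname() for the local hostname.
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

CORE_SITE = """\
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://cloud15:54310</value>
  </property>
</configuration>
"""

def default_fs_host(xml_text):
    """Return the hostname in the fs.default.name URI, or None if absent."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == "fs.default.name":
            return urlparse(prop.findtext("value")).hostname
    return None

local_hostname = "cloud15"  # assumption for illustration; see comment above
fs_host = default_fs_host(CORE_SITE)
if fs_host == local_hostname:
    print("fs.default.name points at this machine (%s), not the namenode"
          % fs_host)
```

Run on the thread's slave config, this flags exactly the mismatch Sivaram spotted.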
Re: Datanode doesn't connect to Namenode
Posted by Jitendra Yadav <je...@gmail.com>.
Hi,
Your logs show that the IPC call is not going to the namenode; it is
hitting the datanode itself.
Could you please check your datanode process status?
Regards
Jitendra
On Wed, Aug 7, 2013 at 10:29 PM, Felipe Gutierrez <
felipe.o.gutierrez@gmail.com> wrote:
> Hi everyone,
>
> On my slave machine (cloud15) the datanode shows this log. It doesn't
> connect to the master (cloud6).
>
> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: cloud15/192.168.188.15:54310. Already tried 9 time(s);
> retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10,
> sleepTime=1 SECONDS)
> 2013-08-07 13:44:03,110 INFO org.apache.hadoop.ipc.RPC: Server at cloud15/
> 192.168.188.15:54310 not available yet, Zzzzz...
>
> But when I run the jps command on the slave machine, DataNode is running.
> This is my core-site.xml on the slave machine (cloud15):
> <configuration>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/app/hadoop/tmp</value>
> <description>A base for other temporary directories.</description>
> </property>
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://cloud15:54310</value>
> <description>The name of the default file system. A URI whose
> scheme and authority determine the FileSystem implementation. The
> uri's scheme determines the config property (fs.SCHEME.impl) naming
> the FileSystem implementation class. The uri's authority is used to
> determine the host, port, etc. for a filesystem.</description>
> </property>
> </configuration>
>
> On the master machine I just swap cloud15 for cloud6.
> In the file /etc/hosts I have the lines (192.168.188.15 cloud15) and
> (192.168.188.6 cloud6), and both machines can ssh to each other without a password.
>
> Am I missing anything?
>
> Thanks in advance!
> Felipe
>
>
> --
> *--
> -- Felipe Oliveira Gutierrez
> -- Felipe.o.Gutierrez@gmail.com
> -- https://sites.google.com/site/lipe82/Home/diaadia*
>
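Jitendra's advice to check the process status pairs naturally with a reachability check on the RPC port from the log. A minimal sketch, not a Hadoop API, just a plain TCP probe; the hostnames and port are the ones quoted in this thread:

```python
# Hedged sketch: probe whether a TCP connection to a namenode RPC port
# succeeds. The "Retrying connect to server: cloud15/...:54310" log line
# means the datanode is probing its own address, where no namenode listens.
import socket

def rpc_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On the slave, the check that *should* pass is against the namenode:
#   rpc_reachable("cloud6", 54310)   # hostname/port taken from this thread
```

If this returns False for the namenode from the slave, the problem is connectivity (DNS, /etc/hosts, firewall) rather than the Hadoop daemons themselves.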