Posted to hdfs-user@hadoop.apache.org by orahad bigdata <or...@gmail.com> on 2013/08/30 19:56:13 UTC

hadoop 2.0.5 datanode heartbeat issue

Hi All,

I'm using Hadoop 2.0.5 HA with QJM. After starting the cluster I did
some manual failovers between the NameNodes. When I then opened the
web UI of both NameNodes, I saw a strange situation: my DN is
connected to the standby NN but is not sending heartbeats to the
active NameNode.

Please guide.

Thanks

Re: hadoop 2.0.5 datanode heartbeat issue

Posted by Jitendra Yadav <je...@gmail.com>.
Hi,

You may have had a problem during HDFS start-up that caused this issue.

Thanks
Jitendra

On 8/31/13, orahad bigdata <or...@gmail.com> wrote:
> Thanks Jitendra,
>
> I have restarted my DataNode and suddenly it works for me :) now it's
> connected to both NN's.
>
> Do you know why this issue occurred?
>
> Thanks
>
>
>
> On Sat, Aug 31, 2013 at 1:24 AM, Jitendra Yadav
> <je...@gmail.com>wrote:
>
>> Hi,
>>
>> Your conf looks fine, but I would say you should restart your DN
>> once and check your NN web UI.
>>
>> Regards
>> Jitendra
>>
>> On 8/31/13, orahad bigdata <or...@gmail.com> wrote:
>> > here is my conf files.
>> >
>> > -----------core-site.xml-----------
>> > <configuration>
>> > <property>
>> >   <name>fs.defaultFS</name>
>> >   <value>hdfs://orahadoop</value>
>> > </property>
>> > <property>
>> >   <name>dfs.journalnode.edits.dir</name>
>> >   <value>/u0/journal/node/local/data</value>
>> > </property>
>> > </configuration>
>> >
>> > ------------ hdfs-site.xml-------------
>> > <configuration>
>> > <property>
>> >   <name>dfs.nameservices</name>
>> >   <value>orahadoop</value>
>> > </property>
>> > <property>
>> >   <name>dfs.ha.namenodes.orahadoop</name>
>> > <value>node1,node2</value>
>> > </property>
>> > <property>
>> >   <name>dfs.namenode.rpc-address.orahadoop.node1</name>
>> >   <value>clone1:8020</value>
>> > </property>
>> > <property>
>> >   <name>dfs.namenode.rpc-address.orahadoop.node2</name>
>> >   <value>clone2:8020</value>
>> > </property>
>> > <property>
>> >   <name>dfs.namenode.http-address.orahadoop.node1</name>
>> >   <value>clone1:50070</value>
>> > </property>
>> > <property>
>> >   <name>dfs.namenode.http-address.orahadoop.node2</name>
>> >   <value>clone2:50070</value>
>> > </property>
>> > <property>
>> >   <name>dfs.namenode.shared.edits.dir</name>
>> >
>> > <value>qjournal://clone3:8485;clone1:8485;clone2:8485/orahadoop</value>
>> > </property>
>> > <property>
>> >   <name>dfs.client.failover.proxy.provider.orahadoop</name>
>> >
>> >
>> <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>> > </property>
>> > </configuration>
>> >
>> > --------- mapred-site.xml -------------
>> >
>> > <configuration>
>> > <property>
>> >     <name>mapreduce.framework.name</name>
>> >     <value>classic</value>
>> >   </property>
>> > </configuration>
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Sat, Aug 31, 2013 at 12:21 AM, Jing Zhao <ji...@hortonworks.com>
>> wrote:
>> >
>> >> Another possibility I can imagine is that the old configuration
>> >> property "fs.default.name" is still in your configuration with a
>> >> single NN's host+ip as its value. In that case this bad value may
>> >> overwrite the value of fs.defaultFS.
>> >>
>> >> It may be helpful if you can post your configurations.
>> >>
>> >> On Fri, Aug 30, 2013 at 11:32 AM, orahad bigdata <or...@gmail.com>
>> >> wrote:
>> >> > Thanks Jing,
>> >> >
>> >> > I'm using same configuration files at datanode side.
>> >> >
>> >> > dfs.nameservices -> orahadoop (hdfs-site.xml)
>> >> >
>> >> > fs.defaultFS -> hdfs://orahadoop (core-site.xml)
>> >> >
>> >> > Thanks
>> >> > On 8/30/13, Jing Zhao <ji...@hortonworks.com> wrote:
>> >> >> You may need to make sure the configuration of your DN has also
>> >> >> been
>> >> >> updated for HA. If your DN's configuration still uses the old URL
>> >> >> (e.g., one of your NN's host+port) for "fs.defaultFS", DN will only
>> >> >> connect to that NN.
>> >> >>
>> >> >> On Fri, Aug 30, 2013 at 10:56 AM, orahad bigdata <
>> oraclehad@gmail.com>
>> >> >> wrote:
>> >> >>> Hi All,
>> >> >>>
>> >> >>> I'm using Hadoop 2.0.5 HA with QJM, After starting the cluster I
>> >> >>> did
>> >> >>> some manual switch overs between NN.Then after I opened WEBUI page
>> >> >>> for
>> >> >>> both the NN, I saw some strange situation where my DN connected to
>> >> >>> standby NN but not sending the heartbeat to primary NameNode .
>> >> >>>
>> >> >>> please guide.
>> >> >>>
>> >> >>> Thanks
>> >> >>
>> >> >> --
>> >> >> CONFIDENTIALITY NOTICE
>> >> >> NOTICE: This message is intended for the use of the individual or
>> >> entity to
>> >> >>
>> >> >> which it is addressed and may contain information that is
>> >> >> confidential,
>> >> >> privileged and exempt from disclosure under applicable law. If the
>> >> reader
>> >> >> of this message is not the intended recipient, you are hereby
>> notified
>> >> that
>> >> >>
>> >> >> any printing, copying, dissemination, distribution, disclosure or
>> >> >> forwarding of this communication is strictly prohibited. If you
>> >> >> have
>> >> >> received this communication in error, please contact the sender
>> >> immediately
>> >> >>
>> >> >> and delete it from your system. Thank You.
>> >> >>
>> >>
>> >>
>> >
>>
>
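
For context on how the logical URI in fs.defaultFS is resolved, here is a rough
Python sketch of what the HA client-side resolution in the configs above amounts
to. The `resolve_namenodes` helper is illustrative only; it mimics the idea
behind ConfiguredFailoverProxyProvider and is not Hadoop's actual code.

```python
# Illustrative sketch: resolving a logical HDFS URI such as hdfs://orahadoop
# to concrete NameNode RPC addresses, using the HA properties posted above.

conf = {
    "fs.defaultFS": "hdfs://orahadoop",
    "dfs.nameservices": "orahadoop",
    "dfs.ha.namenodes.orahadoop": "node1,node2",
    "dfs.namenode.rpc-address.orahadoop.node1": "clone1:8020",
    "dfs.namenode.rpc-address.orahadoop.node2": "clone2:8020",
}

def resolve_namenodes(conf):
    """Return the candidate NN addresses for the default filesystem."""
    authority = conf["fs.defaultFS"].split("://", 1)[1]   # e.g. "orahadoop"
    if authority not in conf.get("dfs.nameservices", "").split(","):
        # Not a logical nameservice: treat it as a plain host:port.
        return [authority]
    nn_ids = conf["dfs.ha.namenodes." + authority].split(",")
    return [conf["dfs.namenode.rpc-address.%s.%s" % (authority, nn)]
            for nn in nn_ids]

print(resolve_namenodes(conf))   # ['clone1:8020', 'clone2:8020']
```

This is why a DN with correct HA config registers with both NameNodes: the
logical nameservice expands to every configured NN, not a single host.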

Re: hadoop 2.0.5 datanode heartbeat issue

Posted by orahad bigdata <or...@gmail.com>.
Thanks Jitendra,

I restarted my DataNode and it suddenly works for me :) now it's
connected to both NNs.

Do you know why this issue occurred?

Thanks



On Sat, Aug 31, 2013 at 1:24 AM, Jitendra Yadav
<je...@gmail.com>wrote:

> Hi,
>
> Your conf looks fine, but I would say you should restart your DN
> once and check your NN web UI.
>
> Regards
> Jitendra
>
> On 8/31/13, orahad bigdata <or...@gmail.com> wrote:
> > here is my conf files.
> >
> > -----------core-site.xml-----------
> > <configuration>
> > <property>
> >   <name>fs.defaultFS</name>
> >   <value>hdfs://orahadoop</value>
> > </property>
> > <property>
> >   <name>dfs.journalnode.edits.dir</name>
> >   <value>/u0/journal/node/local/data</value>
> > </property>
> > </configuration>
> >
> > ------------ hdfs-site.xml-------------
> > <configuration>
> > <property>
> >   <name>dfs.nameservices</name>
> >   <value>orahadoop</value>
> > </property>
> > <property>
> >   <name>dfs.ha.namenodes.orahadoop</name>
> > <value>node1,node2</value>
> > </property>
> > <property>
> >   <name>dfs.namenode.rpc-address.orahadoop.node1</name>
> >   <value>clone1:8020</value>
> > </property>
> > <property>
> >   <name>dfs.namenode.rpc-address.orahadoop.node2</name>
> >   <value>clone2:8020</value>
> > </property>
> > <property>
> >   <name>dfs.namenode.http-address.orahadoop.node1</name>
> >   <value>clone1:50070</value>
> > </property>
> > <property>
> >   <name>dfs.namenode.http-address.orahadoop.node2</name>
> >   <value>clone2:50070</value>
> > </property>
> > <property>
> >   <name>dfs.namenode.shared.edits.dir</name>
> >   <value>qjournal://clone3:8485;clone1:8485;clone2:8485/orahadoop</value>
> > </property>
> > <property>
> >   <name>dfs.client.failover.proxy.provider.orahadoop</name>
> >
> >
> <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> > </property>
> > </configuration>
> >
> > --------- mapred-site.xml -------------
> >
> > <configuration>
> > <property>
> >     <name>mapreduce.framework.name</name>
> >     <value>classic</value>
> >   </property>
> > </configuration>
> >
> >
> >
> >
> >
> >
> >
> > On Sat, Aug 31, 2013 at 12:21 AM, Jing Zhao <ji...@hortonworks.com>
> wrote:
> >
> >> Another possibility I can imagine is that the old configuration
> >> property "fs.default.name" is still in your configuration with a
> >> single NN's host+ip as its value. In that case this bad value may
> >> overwrite the value of fs.defaultFS.
> >>
> >> It may be helpful if you can post your configurations.
> >>
> >> On Fri, Aug 30, 2013 at 11:32 AM, orahad bigdata <or...@gmail.com>
> >> wrote:
> >> > Thanks Jing,
> >> >
> >> > I'm using same configuration files at datanode side.
> >> >
> >> > dfs.nameservices -> orahadoop (hdfs-site.xml)
> >> >
> >> > fs.defaultFS -> hdfs://orahadoop (core-site.xml)
> >> >
> >> > Thanks
> >> > On 8/30/13, Jing Zhao <ji...@hortonworks.com> wrote:
> >> >> You may need to make sure the configuration of your DN has also been
> >> >> updated for HA. If your DN's configuration still uses the old URL
> >> >> (e.g., one of your NN's host+port) for "fs.defaultFS", DN will only
> >> >> connect to that NN.
> >> >>
> >> >> On Fri, Aug 30, 2013 at 10:56 AM, orahad bigdata <
> oraclehad@gmail.com>
> >> >> wrote:
> >> >>> Hi All,
> >> >>>
> >> >>> I'm using Hadoop 2.0.5 HA with QJM, After starting the cluster I did
> >> >>> some manual switch overs between NN.Then after I opened WEBUI page
> >> >>> for
> >> >>> both the NN, I saw some strange situation where my DN connected to
> >> >>> standby NN but not sending the heartbeat to primary NameNode .
> >> >>>
> >> >>> please guide.
> >> >>>
> >> >>> Thanks
> >> >>
> >>
> >
>
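
Jing's point above about a stale fs.default.name shadowing fs.defaultFS is easy
to check for. Below is a small, hypothetical Python helper (the function name
and the conf-dir argument are assumptions, not part of Hadoop) that scans a
config directory for the deprecated key:

```python
# Sketch: detect a leftover deprecated fs.default.name entry that could
# shadow fs.defaultFS, as suggested in the thread. conf_dir is assumed to
# be the node's Hadoop configuration directory.
import glob
import os

def find_deprecated_default_fs(conf_dir):
    """Return the *.xml files in conf_dir that still set fs.default.name."""
    hits = []
    for path in sorted(glob.glob(os.path.join(conf_dir, "*.xml"))):
        with open(path) as f:
            if "<name>fs.default.name</name>" in f.read():
                hits.append(path)
    return hits
```

Running this against each DataNode's conf dir before a restart would show
whether the DN might still resolve the default filesystem to a single NN.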

Re: hadoop 2.0.5 datanode heartbeat issue

Posted by orahad bigdata <or...@gmail.com>.
Thanks Jitendra,

I restarted my DataNode and it suddenly works for me :) now it's
connected to both NNs.

Do you know why this issue occurred?

Thanks
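For readers hitting the same symptom, each NameNode reports its own view of the cluster through its /jmx endpoint, so you can confirm that a DN is counted by both the active and the standby NN. The sketch below is illustrative rather than part of Hadoop: the FSNamesystemState bean and its NumLiveDataNodes counter are standard NameNode metrics, while the hostnames (clone1/clone2) and port 50070 are simply the ones used in this thread.

```python
import json

def live_datanodes(jmx_json_text):
    """Extract NumLiveDataNodes from a NameNode /jmx response
    (query: Hadoop:service=NameNode,name=FSNamesystemState)."""
    beans = json.loads(jmx_json_text)["beans"]
    for bean in beans:
        if bean.get("name", "").endswith("FSNamesystemState"):
            return bean["NumLiveDataNodes"]
    raise KeyError("FSNamesystemState bean not found")

def fetch_live_datanodes(nn_http_addr):
    """Fetch the counter from a live NN web address, e.g. 'clone1:50070'.
    Requires a reachable NameNode, so it is not exercised here."""
    from urllib.request import urlopen  # stdlib
    url = ("http://%s/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState"
           % nn_http_addr)
    with urlopen(url) as resp:
        return live_datanodes(resp.read().decode("utf-8"))

# In a healthy HA setup every DN is counted by both NameNodes:
#   fetch_live_datanodes("clone1:50070") == fetch_live_datanodes("clone2:50070")
```

If the two counts disagree after a DN restart, the DN is heartbeating to only one NN, which was the symptom reported at the top of this thread.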



On Sat, Aug 31, 2013 at 1:24 AM, Jitendra Yadav
<je...@gmail.com>wrote:

> Hi,
>
> However your conf looks fine but I would say that you should  restart
> your DN once and check your NN weburl.
>
> Regards
> Jitendra
>
> On 8/31/13, orahad bigdata <or...@gmail.com> wrote:
> > here is my conf files.
> >
> > -----------core-site.xml-----------
> > <configuration>
> > <property>
> >   <name>fs.defaultFS</name>
> >   <value>hdfs://orahadoop</value>
> > </property>
> > <property>
> >   <name>dfs.journalnode.edits.dir</name>
> >   <value>/u0/journal/node/local/data</value>
> > </property>
> > </configuration>
> >
> > ------------ hdfs-site.xml-------------
> > <configuration>
> > <property>
> >   <name>dfs.nameservices</name>
> >   <value>orahadoop</value>
> > </property>
> > <property>
> >   <name>dfs.ha.namenodes.orahadoop</name>
> > <value>node1,node2</value>
> > </property>
> > <property>
> >   <name>dfs.namenode.rpc-address.orahadoop.node1</name>
> >   <value>clone1:8020</value>
> > </property>
> > <property>
> >   <name>dfs.namenode.rpc-address.orahadoop.node2</name>
> >   <value>clone2:8020</value>
> > </property>
> > <property>
> >   <name>dfs.namenode.http-address.orahadoop.node1</name>
> >   <value>clone1:50070</value>
> > </property>
> > <property>
> >   <name>dfs.namenode.http-address.orahadoop.node2</name>
> >   <value>clone2:50070</value>
> > </property>
> > <property>
> >   <name>dfs.namenode.shared.edits.dir</name>
> >   <value>qjournal://clone3:8485;clone1:8485;clone2:8485/orahadoop</value>
> > </property>
> > <property>
> >   <name>dfs.client.failover.proxy.provider.orahadoop</name>
> >
> >
> <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> > </property>
> > </configuration>
> >
> > --------- mapred-site.xml -------------
> >
> > <configuration>
> > <property>
> >     <name>mapreduce.framework.name</name>
> >     <value>classic</value>
> >   </property>
> > </configuration>
> >
> >
> >
> >
> >
> >
> >
> > On Sat, Aug 31, 2013 at 12:21 AM, Jing Zhao <ji...@hortonworks.com>
> wrote:
> >
> >> Another possibility I can imagine is that the old configuration
> >> property "fs.default.name" is still in your configuration with a
> >> single NN's host+ip as its value. In that case this bad value may
> >> overwrite the value of fs.defaultFS.
> >>
> >> It may be helpful if you can post your configurations.
> >>
> >> On Fri, Aug 30, 2013 at 11:32 AM, orahad bigdata <or...@gmail.com>
> >> wrote:
> >> > Thanks Jing,
> >> >
> >> > I'm using same configuration files at datanode side.
> >> >
> >> > dfs.nameservices -> orahadoop (hdfs-site.xml)
> >> >
> >> > fs.defaultFS -> hdfs://orahadoop (core-site.xml)
> >> >
> >> > Thanks
> >> > On 8/30/13, Jing Zhao <ji...@hortonworks.com> wrote:
> >> >> You may need to make sure the configuration of your DN has also been
> >> >> updated for HA. If your DN's configuration still uses the old URL
> >> >> (e.g., one of your NN's host+port) for "fs.defaultFS", DN will only
> >> >> connect to that NN.
> >> >>
> >> >> On Fri, Aug 30, 2013 at 10:56 AM, orahad bigdata <
> oraclehad@gmail.com>
> >> >> wrote:
> >> >>> Hi All,
> >> >>>
> >> >>> I'm using Hadoop 2.0.5 HA with QJM, After starting the cluster I did
> >> >>> some manual switch overs between NN.Then after I opened WEBUI page
> >> >>> for
> >> >>> both the NN, I saw some strange situation where my DN connected to
> >> >>> standby NN but not sending the heartbeat to primary NameNode .
> >> >>>
> >> >>> please guide.
> >> >>>
> >> >>> Thanks
> >> >>
> >> >> --
> >> >> CONFIDENTIALITY NOTICE
> >> >> NOTICE: This message is intended for the use of the individual or
> >> entity to
> >> >>
> >> >> which it is addressed and may contain information that is
> >> >> confidential,
> >> >> privileged and exempt from disclosure under applicable law. If the
> >> reader
> >> >> of this message is not the intended recipient, you are hereby
> notified
> >> that
> >> >>
> >> >> any printing, copying, dissemination, distribution, disclosure or
> >> >> forwarding of this communication is strictly prohibited. If you have
> >> >> received this communication in error, please contact the sender
> >> immediately
> >> >>
> >> >> and delete it from your system. Thank You.
> >> >>
> >>
> >> --
> >> CONFIDENTIALITY NOTICE
> >> NOTICE: This message is intended for the use of the individual or entity
> >> to
> >> which it is addressed and may contain information that is confidential,
> >> privileged and exempt from disclosure under applicable law. If the
> reader
> >> of this message is not the intended recipient, you are hereby notified
> >> that
> >> any printing, copying, dissemination, distribution, disclosure or
> >> forwarding of this communication is strictly prohibited. If you have
> >> received this communication in error, please contact the sender
> >> immediately
> >> and delete it from your system. Thank You.
> >>
> >
>

Re: hadoop 2.0.5 datanode heartbeat issue

Posted by Jitendra Yadav <je...@gmail.com>.
Hi,

Your conf looks fine; however, I would suggest that you restart
your DN once and then check your NN web UI.

Regards
Jitendra

On 8/31/13, orahad bigdata <or...@gmail.com> wrote:
> here is my conf files.
>
> -----------core-site.xml-----------
> <configuration>
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://orahadoop</value>
> </property>
> <property>
>   <name>dfs.journalnode.edits.dir</name>
>   <value>/u0/journal/node/local/data</value>
> </property>
> </configuration>
>
> ------------ hdfs-site.xml-------------
> <configuration>
> <property>
>   <name>dfs.nameservices</name>
>   <value>orahadoop</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.orahadoop</name>
> <value>node1,node2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.orahadoop.node1</name>
>   <value>clone1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.orahadoop.node2</name>
>   <value>clone2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.orahadoop.node1</name>
>   <value>clone1:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.orahadoop.node2</name>
>   <value>clone2:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir</name>
>   <value>qjournal://clone3:8485;clone1:8485;clone2:8485/orahadoop</value>
> </property>
> <property>
>   <name>dfs.client.failover.proxy.provider.orahadoop</name>
>
> <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
> </configuration>
>
> --------- mapred-site.xml -------------
>
> <configuration>
> <property>
>     <name>mapreduce.framework.name</name>
>     <value>classic</value>
>   </property>
> </configuration>
>
>
>
>
>
>
>
> On Sat, Aug 31, 2013 at 12:21 AM, Jing Zhao <ji...@hortonworks.com> wrote:
>
>> Another possibility I can imagine is that the old configuration
>> property "fs.default.name" is still in your configuration with a
>> single NN's host+ip as its value. In that case this bad value may
>> overwrite the value of fs.defaultFS.
>>
>> It may be helpful if you can post your configurations.
>>
>> On Fri, Aug 30, 2013 at 11:32 AM, orahad bigdata <or...@gmail.com>
>> wrote:
>> > Thanks Jing,
>> >
>> > I'm using same configuration files at datanode side.
>> >
>> > dfs.nameservices -> orahadoop (hdfs-site.xml)
>> >
>> > fs.defaultFS -> hdfs://orahadoop (core-site.xml)
>> >
>> > Thanks
>> > On 8/30/13, Jing Zhao <ji...@hortonworks.com> wrote:
>> >> You may need to make sure the configuration of your DN has also been
>> >> updated for HA. If your DN's configuration still uses the old URL
>> >> (e.g., one of your NN's host+port) for "fs.defaultFS", DN will only
>> >> connect to that NN.
>> >>
>> >> On Fri, Aug 30, 2013 at 10:56 AM, orahad bigdata <or...@gmail.com>
>> >> wrote:
>> >>> Hi All,
>> >>>
>> >>> I'm using Hadoop 2.0.5 HA with QJM, After starting the cluster I did
>> >>> some manual switch overs between NN.Then after I opened WEBUI page
>> >>> for
>> >>> both the NN, I saw some strange situation where my DN connected to
>> >>> standby NN but not sending the heartbeat to primary NameNode .
>> >>>
>> >>> please guide.
>> >>>
>> >>> Thanks
>> >>
>> >> --
>> >> CONFIDENTIALITY NOTICE
>> >> NOTICE: This message is intended for the use of the individual or
>> entity to
>> >>
>> >> which it is addressed and may contain information that is
>> >> confidential,
>> >> privileged and exempt from disclosure under applicable law. If the
>> reader
>> >> of this message is not the intended recipient, you are hereby notified
>> that
>> >>
>> >> any printing, copying, dissemination, distribution, disclosure or
>> >> forwarding of this communication is strictly prohibited. If you have
>> >> received this communication in error, please contact the sender
>> immediately
>> >>
>> >> and delete it from your system. Thank You.
>> >>
>>
>> --
>> CONFIDENTIALITY NOTICE
>> NOTICE: This message is intended for the use of the individual or entity
>> to
>> which it is addressed and may contain information that is confidential,
>> privileged and exempt from disclosure under applicable law. If the reader
>> of this message is not the intended recipient, you are hereby notified
>> that
>> any printing, copying, dissemination, distribution, disclosure or
>> forwarding of this communication is strictly prohibited. If you have
>> received this communication in error, please contact the sender
>> immediately
>> and delete it from your system. Thank You.
>>
>

Re: hadoop 2.0.5 datanode heartbeat issue

Posted by orahad bigdata <or...@gmail.com>.
Here are my conf files.

-----------core-site.xml-----------
<configuration>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://orahadoop</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/u0/journal/node/local/data</value>
</property>
</configuration>

------------ hdfs-site.xml-------------
<configuration>
<property>
  <name>dfs.nameservices</name>
  <value>orahadoop</value>
</property>
<property>
  <name>dfs.ha.namenodes.orahadoop</name>
<value>node1,node2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.orahadoop.node1</name>
  <value>clone1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.orahadoop.node2</name>
  <value>clone2:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.orahadoop.node1</name>
  <value>clone1:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.orahadoop.node2</name>
  <value>clone2:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://clone3:8485;clone1:8485;clone2:8485/orahadoop</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.orahadoop</name>

<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
</configuration>

--------- mapred-site.xml -------------

<configuration>
<property>
    <name>mapreduce.framework.name</name>
    <value>classic</value>
  </property>
</configuration>
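As Jing points out in the quoted thread below, a leftover fs.default.name entry can silently override fs.defaultFS and pin clients to a single NN. A minimal sketch of that sanity check (the two property names are the real Hadoop keys; the parsing helper and sample values are purely illustrative, not part of Hadoop):

```python
import xml.etree.ElementTree as ET

def config_props(xml_text):
    """Parse a Hadoop *-site.xml string into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

def check_default_fs(core_site_xml):
    """Return a warning if the deprecated fs.default.name is present
    and disagrees with fs.defaultFS, else None."""
    props = config_props(core_site_xml)
    old, new = props.get("fs.default.name"), props.get("fs.defaultFS")
    if old is not None and old != new:
        return ("fs.default.name=%s conflicts with fs.defaultFS=%s; "
                "clients and DNs may bind to a single NN" % (old, new))
    return None
```

Running check_default_fs over the core-site.xml above returns None, since only fs.defaultFS=hdfs://orahadoop is set; a stale fs.default.name pointing at one NN's host:port would trigger the warning.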







On Sat, Aug 31, 2013 at 12:21 AM, Jing Zhao <ji...@hortonworks.com> wrote:

> Another possibility I can imagine is that the old configuration
> property "fs.default.name" is still in your configuration with a
> single NN's host+ip as its value. In that case this bad value may
> overwrite the value of fs.defaultFS.
>
> It may be helpful if you can post your configurations.
>
> On Fri, Aug 30, 2013 at 11:32 AM, orahad bigdata <or...@gmail.com>
> wrote:
> > Thanks Jing,
> >
> > I'm using same configuration files at datanode side.
> >
> > dfs.nameservices -> orahadoop (hdfs-site.xml)
> >
> > fs.defaultFS -> hdfs://orahadoop (core-site.xml)
> >
> > Thanks
> > On 8/30/13, Jing Zhao <ji...@hortonworks.com> wrote:
> >> You may need to make sure the configuration of your DN has also been
> >> updated for HA. If your DN's configuration still uses the old URL
> >> (e.g., one of your NN's host+port) for "fs.defaultFS", DN will only
> >> connect to that NN.
> >>
> >> On Fri, Aug 30, 2013 at 10:56 AM, orahad bigdata <or...@gmail.com>
> >> wrote:
> >>> Hi All,
> >>>
> >>> I'm using Hadoop 2.0.5 HA with QJM, After starting the cluster I did
> >>> some manual switch overs between NN.Then after I opened WEBUI page for
> >>> both the NN, I saw some strange situation where my DN connected to
> >>> standby NN but not sending the heartbeat to primary NameNode .
> >>>
> >>> please guide.
> >>>
> >>> Thanks
> >>
> >> --
> >> CONFIDENTIALITY NOTICE
> >> NOTICE: This message is intended for the use of the individual or
> entity to
> >>
> >> which it is addressed and may contain information that is confidential,
> >> privileged and exempt from disclosure under applicable law. If the
> reader
> >> of this message is not the intended recipient, you are hereby notified
> that
> >>
> >> any printing, copying, dissemination, distribution, disclosure or
> >> forwarding of this communication is strictly prohibited. If you have
> >> received this communication in error, please contact the sender
> immediately
> >>
> >> and delete it from your system. Thank You.
> >>
>
>


Re: hadoop 2.0.5 datanode heartbeat issue

Posted by Jing Zhao <ji...@hortonworks.com>.
Another possibility I can imagine is that the old configuration
property "fs.default.name" is still in your configuration with a
single NN's host+port as its value. In that case this stale value may
overwrite the value of fs.defaultFS.

It may be helpful if you can post your configurations.
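A quick way to check for this is to scan the deployed *-site.xml files for the deprecated fs.default.name key. The Python sketch below does that with only the standard library; the embedded sample core-site.xml is hypothetical (it assumes a stale single-NN value was left behind) and uses the plain <property><name>/<value> layout seen in this thread:

```python
# Sketch: detect the deprecated fs.default.name key, which can
# silently override fs.defaultFS. The sample XML below is a
# hypothetical "bad" core-site.xml, not taken from the thread.
import xml.etree.ElementTree as ET

CORE_SITE = """<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://orahadoop</value></property>
  <property><name>fs.default.name</name><value>hdfs://clone1:8020</value></property>
</configuration>"""

def deprecated_keys(xml_text):
    """Return any fs.default.name entries found in a Hadoop config file."""
    root = ET.fromstring(xml_text)
    props = {p.findtext("name"): p.findtext("value")
             for p in root.iter("property")}
    return {k: v for k, v in props.items() if k == "fs.default.name"}

print(deprecated_keys(CORE_SITE))  # a non-empty result means trouble
```

Running this against each node's core-site.xml and hdfs-site.xml should return an empty dict everywhere; any hit points at the override Jing describes.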

On Fri, Aug 30, 2013 at 11:32 AM, orahad bigdata <or...@gmail.com> wrote:
> Thanks Jing,
>
> I'm using same configuration files at datanode side.
>
> dfs.nameservices -> orahadoop (hdfs-site.xml)
>
> fs.defaultFS -> hdfs://orahadoop (core-site.xml)
>
> Thanks
> On 8/30/13, Jing Zhao <ji...@hortonworks.com> wrote:
>> You may need to make sure the configuration of your DN has also been
>> updated for HA. If your DN's configuration still uses the old URL
>> (e.g., one of your NN's host+port) for "fs.defaultFS", DN will only
>> connect to that NN.
>>
>> On Fri, Aug 30, 2013 at 10:56 AM, orahad bigdata <or...@gmail.com>
>> wrote:
>>> Hi All,
>>>
>>> I'm using Hadoop 2.0.5 HA with QJM, After starting the cluster I did
>>> some manual switch overs between NN.Then after I opened WEBUI page for
>>> both the NN, I saw some strange situation where my DN connected to
>>> standby NN but not sending the heartbeat to primary NameNode .
>>>
>>> please guide.
>>>
>>> Thanks
>>



Re: hadoop 2.0.5 datanode heartbeat issue

Posted by orahad bigdata <or...@gmail.com>.
Thanks Jing,

I'm using the same configuration files on the datanode side.

dfs.nameservices -> orahadoop (hdfs-site.xml)

fs.defaultFS -> hdfs://orahadoop (core-site.xml)

Thanks
On 8/30/13, Jing Zhao <ji...@hortonworks.com> wrote:
> You may need to make sure the configuration of your DN has also been
> updated for HA. If your DN's configuration still uses the old URL
> (e.g., one of your NN's host+port) for "fs.defaultFS", DN will only
> connect to that NN.
>
> On Fri, Aug 30, 2013 at 10:56 AM, orahad bigdata <or...@gmail.com>
> wrote:
>> Hi All,
>>
>> I'm using Hadoop 2.0.5 HA with QJM, After starting the cluster I did
>> some manual switch overs between NN.Then after I opened WEBUI page for
>> both the NN, I saw some strange situation where my DN connected to
>> standby NN but not sending the heartbeat to primary NameNode .
>>
>> please guide.
>>
>> Thanks
>

Re: hadoop 2.0.5 datanode heartbeat issue

Posted by orahad bigdata <or...@gmail.com>.
Thanks Jing,

I'm using same configuration files at datanode side.

dfs.nameservices -> orahadoop (hdfs-site.xml)

fs.defaultFS -> hdfs://orahadoop (core-site.xml)

Thanks
On 8/30/13, Jing Zhao <ji...@hortonworks.com> wrote:
> You may need to make sure the configuration of your DN has also been
> updated for HA. If your DN's configuration still uses the old URL
> (e.g., one of your NN's host+port) for "fs.defaultFS", DN will only
> connect to that NN.
>
> On Fri, Aug 30, 2013 at 10:56 AM, orahad bigdata <or...@gmail.com>
> wrote:
>> Hi All,
>>
>> I'm using Hadoop 2.0.5 HA with QJM, After starting the cluster I did
>> some manual switch overs between NN.Then after I opened WEBUI page for
>> both the NN, I saw some strange situation where my DN connected to
>> standby NN but not sending the heartbeat to primary NameNode .
>>
>> please guide.
>>
>> Thanks
>
> --
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or entity to
>
> which it is addressed and may contain information that is confidential,
> privileged and exempt from disclosure under applicable law. If the reader
> of this message is not the intended recipient, you are hereby notified that
>
> any printing, copying, dissemination, distribution, disclosure or
> forwarding of this communication is strictly prohibited. If you have
> received this communication in error, please contact the sender immediately
>
> and delete it from your system. Thank You.
>

Re: hadoop 2.0.5 datanode heartbeat issue

Posted by orahad bigdata <or...@gmail.com>.
Thanks Jing,

I'm using same configuration files at datanode side.

dfs.nameservices -> orahadoop (hdfs-site.xml)

fs.defaultFS -> hdfs://orahadoop (core-site.xml)

Thanks
On 8/30/13, Jing Zhao <ji...@hortonworks.com> wrote:
> You may need to make sure the configuration of your DN has also been
> updated for HA. If your DN's configuration still uses the old URL
> (e.g., one of your NN's host+port) for "fs.defaultFS", DN will only
> connect to that NN.
>
> On Fri, Aug 30, 2013 at 10:56 AM, orahad bigdata <or...@gmail.com>
> wrote:
>> Hi All,
>>
>> I'm using Hadoop 2.0.5 HA with QJM, After starting the cluster I did
>> some manual switch overs between NN.Then after I opened WEBUI page for
>> both the NN, I saw some strange situation where my DN connected to
>> standby NN but not sending the heartbeat to primary NameNode .
>>
>> please guide.
>>
>> Thanks
>
> --
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or entity to
>
> which it is addressed and may contain information that is confidential,
> privileged and exempt from disclosure under applicable law. If the reader
> of this message is not the intended recipient, you are hereby notified that
>
> any printing, copying, dissemination, distribution, disclosure or
> forwarding of this communication is strictly prohibited. If you have
> received this communication in error, please contact the sender immediately
>
> and delete it from your system. Thank You.
>

Re: hadoop 2.0.5 datanode heartbeat issue

Posted by orahad bigdata <or...@gmail.com>.
Thanks Jing,

I'm using same configuration files at datanode side.

dfs.nameservices -> orahadoop (hdfs-site.xml)

fs.defaultFS -> hdfs://orahadoop (core-site.xml)

Thanks
On 8/30/13, Jing Zhao <ji...@hortonworks.com> wrote:
> You may need to make sure the configuration of your DN has also been
> updated for HA. If your DN's configuration still uses the old URL
> (e.g., one of your NN's host+port) for "fs.defaultFS", DN will only
> connect to that NN.
>
> On Fri, Aug 30, 2013 at 10:56 AM, orahad bigdata <or...@gmail.com>
> wrote:
>> Hi All,
>>
>> I'm using Hadoop 2.0.5 HA with QJM, After starting the cluster I did
>> some manual switch overs between NN.Then after I opened WEBUI page for
>> both the NN, I saw some strange situation where my DN connected to
>> standby NN but not sending the heartbeat to primary NameNode .
>>
>> please guide.
>>
>> Thanks
>
> --
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or entity to
>
> which it is addressed and may contain information that is confidential,
> privileged and exempt from disclosure under applicable law. If the reader
> of this message is not the intended recipient, you are hereby notified that
>
> any printing, copying, dissemination, distribution, disclosure or
> forwarding of this communication is strictly prohibited. If you have
> received this communication in error, please contact the sender immediately
>
> and delete it from your system. Thank You.
>

Re: hadoop 2.0.5 datanode heartbeat issue

Posted by Jing Zhao <ji...@hortonworks.com>.
You may need to make sure the configuration of your DN has also been
updated for HA. If your DN's configuration still uses the old URL
(e.g., one of your NN's host+port) for "fs.defaultFS", DN will only
connect to that NN.

On Fri, Aug 30, 2013 at 10:56 AM, orahad bigdata <or...@gmail.com> wrote:
> Hi All,
>
> I'm using Hadoop 2.0.5 HA with QJM, After starting the cluster I did
> some manual switch overs between NN.Then after I opened WEBUI page for
> both the NN, I saw some strange situation where my DN connected to
> standby NN but not sending the heartbeat to primary NameNode .
>
> please guide.
>
> Thanks

