Posted to hdfs-user@hadoop.apache.org by sam liu <sa...@gmail.com> on 2014/04/29 15:07:24 UTC

For QJM HA solution, after failover, application must update NameNode IP?

Hi Experts,

For example, suppose the application initially accesses the NameNode using
the IP of the active NameNode (9.123.22.1). After a failover, the active
NameNode's IP changes to 9.123.22.2, the IP of the previous standby
NameNode. In this case, must the application update the NameNode IP?

Thanks!

Re: For QJM HA solution, after failover, application must update NameNode IP?

Posted by Aitor Perez Cedres <ap...@pragsis.com>.
Hi,

Just change the fs.defaultFS property in core-site.xml to point at the
logical name (with an HA nameservice the URI carries no port):

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://MYCLUSTER</value>
        <final>true</final>
    </property>

The HDFS client will then know which NN it has to connect to.
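To illustrate how that works (a toy Python sketch, not Hadoop's actual implementation; the property names mirror the real ones, the parsing logic is illustrative): the authority in the fs.defaultFS URI is not a resolvable host:port but a nameservice ID the client looks up in its own configuration:

```python
from urllib.parse import urlparse

# Client-side configuration, as it would be loaded from core-site.xml /
# hdfs-site.xml (keys mirror the real Hadoop property names).
conf = {
    "fs.defaultFS": "hdfs://MYCLUSTER",
    "dfs.nameservices": "MYCLUSTER",
    "dfs.ha.namenodes.MYCLUSTER": "nn1,nn2",
    "dfs.namenode.rpc-address.MYCLUSTER.nn1": "dnsOfNameNode1:8020",
    "dfs.namenode.rpc-address.MYCLUSTER.nn2": "dnsOfNameNode2:8020",
}

def resolve_namenodes(conf):
    """Map the logical authority of fs.defaultFS to its real RPC addresses."""
    nameservice = urlparse(conf["fs.defaultFS"]).netloc
    nn_ids = conf[f"dfs.ha.namenodes.{nameservice}"].split(",")
    return [conf[f"dfs.namenode.rpc-address.{nameservice}.{nn}"] for nn in nn_ids]

print(resolve_namenodes(conf))
# ['dnsOfNameNode1:8020', 'dnsOfNameNode2:8020']
```

Because the lookup happens on every connection attempt, neither address ever needs to change in the application when the active role moves between the two nodes.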

Hope it helps,
Aitor

On 29/04/14 16:07, sam liu wrote:
> Hi Bryan,
>
> Thanks for your detailed response!
>
> - 'you use a logical name for your "group of namenodes"': In your 
> case, it should be 'MYCLUSTER'
>
> - 'provide a means for the client to handle connecting to the 
> currently active one': Could you please give an example?
>
>
>
>
> 2014-04-29 21:57 GMT+08:00 Bryan Beaudreault <bbeaudreault@hubspot.com>:
>
>     If you are using the QJM HA solution, the IP addresses of the
>     namenodes should not change.  Instead your clients should be
>     connecting using the proper HA configurations.  That is, you use a
>     logical name for your "group of namenodes", and provide a means
>     for the client to handle connecting to the currently active one.
>
>     Example:
>
>     <property>
>         <name>dfs.nameservices</name>
>         <value>MYCLUSTER</value>
>     </property>
>
>     <property>
>         <name>dfs.ha.namenodes.MYCLUSTER</name>
>         <value>nn1,nn2</value>
>     </property>
>
>     <property>
>         <name>dfs.namenode.rpc-address.MYCLUSTER.nn1</name>
>         <value>dnsOfNameNode1:8020</value>
>     </property>
>     <property>
>         <name>dfs.namenode.http-address.MYCLUSTER.nn1</name>
>         <value>dnsOfNameNode1:50070</value>
>     </property>
>
>     <property>
>         <name>dfs.namenode.rpc-address.MYCLUSTER.nn2</name>
>         <value>dnsOfNameNode2:8020</value>
>     </property>
>     <property>
>         <name>dfs.namenode.http-address.MYCLUSTER.nn2</name>
>         <value>dnsOfNameNode2:50070</value>
>     </property>
>
>     <property>
>         <name>dfs.client.failover.proxy.provider.MYCLUSTER</name>
>         <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>     </property>
>
>
>     On Tue, Apr 29, 2014 at 9:07 AM, sam liu <samliuhadoop@gmail.com> wrote:
>
>         Hi Experts,
>
>         For example, at the beginning, the application will access
>         NameNode using IP of active NameNode(IP: 9.123.22.1). 
>         However, after failover, the IP of active NameNode is changed
>         to 9.123.22.2 which is the IP of previous standby NameNode. In
>         this case, application must update NameNode IP?
>
>         Thanks!
>
>
>

-- 
Aitor Pérez
Big Data System Engineer

Telf.: +34 917 680 490
Fax: +34 913 833 301
C/Manuel Tovar, 49-53 - 28034 Madrid - Spain

http://www.bidoop.es


Re: For QJM HA solution, after failover, application must update NameNode IP?

Posted by Harsh J <ha...@cloudera.com>.
Hi Sam,

Bryan meant the last config bit:

  <property>
      <name>dfs.client.failover.proxy.provider.MYCLUSTER</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

This is the class the client will use to perform the failover (i.e.
active NN discovery).
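Conceptually, that provider behaves like the following toy Python sketch (illustrative only; the real class lives in the Hadoop Java client): it keeps the configured list of namenodes and rotates to the next one whenever a request hits a standby.

```python
class StandbyError(Exception):
    """Raised when a request reaches a NameNode that is in standby state."""

class ConfiguredFailoverClient:
    """Toy analogue of ConfiguredFailoverProxyProvider: a fixed list of
    configured namenodes, failing over to the next on standby errors."""
    def __init__(self, namenodes):
        self.namenodes = namenodes
        self.current = 0

    def call(self, request):
        attempts = 0
        while attempts < len(self.namenodes):
            nn = self.namenodes[self.current]
            try:
                return request(nn)
            except StandbyError:
                # Fail over: move on to the next configured NameNode.
                self.current = (self.current + 1) % len(self.namenodes)
                attempts += 1
        raise RuntimeError("no active NameNode found")

# Simulate nn1 being in standby after a failover:
active = {"nn2:8020"}
def ls(nn):
    if nn not in active:
        raise StandbyError(nn)
    return f"listing served by {nn}"

client = ConfiguredFailoverClient(["nn1:8020", "nn2:8020"])
print(client.call(ls))
# listing served by nn2:8020
```

The key point is that discovery of the active NN is a client-side retry policy over the configured addresses, so the application never has to learn a new IP.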

On Tue, Apr 29, 2014 at 7:37 PM, sam liu <sa...@gmail.com> wrote:
> Hi Bryan,
>
> Thanks for your detailed response!
>
> - 'you use a logical name for your "group of namenodes"': In your case, it
> should be 'MYCLUSTER'
>
> - 'provide a means for the client to handle connecting to the currently
> active one': Could you pls give an example?
>
>
>
>
> 2014-04-29 21:57 GMT+08:00 Bryan Beaudreault <bb...@hubspot.com>:
>
>> If you are using the QJM HA solution, the IP addresses of the namenodes
>> should not change.  Instead your clients should be connecting using the
>> proper HA configurations.  That is, you use a logical name for your "group
>> of namenodes", and provide a means for the client to handle connecting to
>> the currently active one.
>>
>> Example:
>>
>>   <property>
>>       <name>dfs.nameservices</name>
>>       <value>MYCLUSTER</value>
>>   </property>
>>
>>   <property>
>>       <name>dfs.ha.namenodes.MYCLUSTER</name>
>>       <value>nn1,nn2</value>
>>   </property>
>>
>>   <property>
>>       <name>dfs.namenode.rpc-address.MYCLUSTER.nn1</name>
>>       <value>dnsOfNameNode1:8020</value>
>>   </property>
>>   <property>
>>       <name>dfs.namenode.http-address.MYCLUSTER.nn1</name>
>>       <value>dnsOfNameNode1:50070</value>
>>   </property>
>>
>>   <property>
>>       <name>dfs.namenode.rpc-address.MYCLUSTER.nn2</name>
>>       <value>dnsOfNameNode2:8020</value>
>>   </property>
>>   <property>
>>       <name>dfs.namenode.http-address.MYCLUSTER.nn2</name>
>>       <value>dnsOfNameNode2:50070</value>
>>   </property>
>>
>>   <property>
>>       <name>dfs.client.failover.proxy.provider.MYCLUSTER</name>
>>       <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>>   </property>
>>
>>
>> On Tue, Apr 29, 2014 at 9:07 AM, sam liu <sa...@gmail.com> wrote:
>>>
>>> Hi Experts,
>>>
>>> For example, at the beginning, the application will access NameNode using
>>> IP of active NameNode(IP: 9.123.22.1).  However, after failover, the IP of
>>> active NameNode is changed to 9.123.22.2 which is the IP of previous standby
>>> NameNode. In this case, application must update NameNode IP?
>>>
>>> Thanks!
>>
>>
>



-- 
Harsh J

Re: For QJM HA solution, after failover, application must update NameNode IP?

Posted by sam liu <sa...@gmail.com>.
Hi Bryan,

Thanks for your detailed response!

- 'you use a logical name for your "group of namenodes"': In your case, it
should be 'MYCLUSTER'

- 'provide a means for the client to handle connecting to the currently
active one': Could you please give an example?




2014-04-29 21:57 GMT+08:00 Bryan Beaudreault <bb...@hubspot.com>:

> If you are using the QJM HA solution, the IP addresses of the namenodes
> should not change.  Instead your clients should be connecting using the
> proper HA configurations.  That is, you use a logical name for your "group
> of namenodes", and provide a means for the client to handle connecting to
> the currently active one.
>
> Example:
>
>   <property>
>       <name>dfs.nameservices</name>
>       <value>MYCLUSTER</value>
>   </property>
>
>   <property>
>       <name>dfs.ha.namenodes.MYCLUSTER</name>
>       <value>nn1,nn2</value>
>   </property>
>
>   <property>
>       <name>dfs.namenode.rpc-address.MYCLUSTER.nn1</name>
>       <value>dnsOfNameNode1:8020</value>
>   </property>
>   <property>
>       <name>dfs.namenode.http-address.MYCLUSTER.nn1</name>
>       <value>dnsOfNameNode1:50070</value>
>   </property>
>
>   <property>
>       <name>dfs.namenode.rpc-address.MYCLUSTER.nn2</name>
>       <value>dnsOfNameNode2:8020</value>
>   </property>
>   <property>
>       <name>dfs.namenode.http-address.MYCLUSTER.nn2</name>
>       <value>dnsOfNameNode2:50070</value>
>   </property>
>
>   <property>
>       <name>dfs.client.failover.proxy.provider.MYCLUSTER</name>
>       <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>   </property>
>
>
> On Tue, Apr 29, 2014 at 9:07 AM, sam liu <sa...@gmail.com> wrote:
>
>> Hi Experts,
>>
>> For example, at the beginning, the application will access NameNode using
>> IP of active NameNode(IP: 9.123.22.1).  However, after failover, the IP of
>> active NameNode is changed to 9.123.22.2 which is the IP of previous
>> standby NameNode. In this case, application must update NameNode IP?
>>
>> Thanks!
>>
>
>

Re: For QJM HA solution, after failover, application must update NameNode IP?

Posted by Bryan Beaudreault <bb...@hubspot.com>.
If you are using the QJM HA solution, the IP addresses of the namenodes
should not change.  Instead your clients should be connecting using the
proper HA configurations.  That is, you use a logical name for your "group
of namenodes", and provide a means for the client to handle connecting to
the currently active one.

Example:

  <property>
      <name>dfs.nameservices</name>
      <value>MYCLUSTER</value>
  </property>

  <property>
      <name>dfs.ha.namenodes.MYCLUSTER</name>
      <value>nn1,nn2</value>
  </property>

  <property>
      <name>dfs.namenode.rpc-address.MYCLUSTER.nn1</name>
      <value>dnsOfNameNode1:8020</value>
  </property>
  <property>
      <name>dfs.namenode.http-address.MYCLUSTER.nn1</name>
      <value>dnsOfNameNode1:50070</value>
  </property>

  <property>
      <name>dfs.namenode.rpc-address.MYCLUSTER.nn2</name>
      <value>dnsOfNameNode2:8020</value>
  </property>
  <property>
      <name>dfs.namenode.http-address.MYCLUSTER.nn2</name>
      <value>dnsOfNameNode2:50070</value>
  </property>

  <property>
      <name>dfs.client.failover.proxy.provider.MYCLUSTER</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
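
To round out the example (a sketch, assuming the MYCLUSTER nameservice
defined above), the client side then only needs the logical name in
core-site.xml; no per-NameNode IP appears anywhere in client config:

  <!-- core-site.xml on the client: point at the logical nameservice,
       not at either NameNode's address. The configured failover proxy
       provider resolves whichever NameNode is currently active. -->
  <property>
      <name>fs.defaultFS</name>
      <value>hdfs://MYCLUSTER</value>
  </property>

With that in place, a plain "hdfs dfs -ls /" or a FileSystem.get(conf)
call in application code keeps working across a failover, with no IP
change on the application side.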


On Tue, Apr 29, 2014 at 9:07 AM, sam liu <sa...@gmail.com> wrote:

> Hi Experts,
>
> For example, at the beginning, the application will access NameNode using
> IP of active NameNode(IP: 9.123.22.1).  However, after failover, the IP of
> active NameNode is changed to 9.123.22.2 which is the IP of previous
> standby NameNode. In this case, application must update NameNode IP?
>
> Thanks!
>
