Posted to dev@hawq.apache.org by Radar Lei <rl...@pivotal.io> on 2018/05/31 01:50:39 UTC

Re: how hawq use HA hdfs

If you are installing a new HAWQ cluster, then the filespace move is not required.

I think HAWQ will treat the host string as a plain host:port URL unless
HAWQ's HDFS HA is configured correctly, so please verify whether you missed
any other steps.
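
For reference, a rough sketch of the pairing HAWQ expects (the 'mycluster'
service ID below is only illustrative): the part of hawq_dfs_url before the
first '/' must match the nameservice declared in HAWQ's hdfs-client.xml,
otherwise HAWQ appears to fall back to parsing it as host:port (hence the
'hdfs://dx:0' error).

In $GPHOME/etc/hawq-site.xml:

<property>
    <name>hawq_dfs_url</name>
    <value>mycluster/hawq_default</value>
</property>

In $GPHOME/etc/hdfs-client.xml:

<property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
</property>
<property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
</property>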

Regards,
Radar

On Thu, May 31, 2018 at 9:40 AM, <xi...@sky-data.cn> wrote:

> I think the filespace move is only needed when HAWQ was installed before
> HDFS HA was enabled, but in my case I enabled HDFS HA first and then
> installed HAWQ.
>
> Given that, I think there is no need to move the filespace.
>
> Besides, my question is: why does the URI not work as expected?
> Or which URI should I use?
>
> ------------------------------
> *From: *"Radar Lei" <rl...@pivotal.io>
> *To: *"user" <us...@hawq.incubator.apache.org>
> *Sent: *Wednesday, May 30, 2018 8:30:01 PM
> *Subject: *Re: how hawq use HA hdfs
>
> Per the steps in the document you mentioned, you need to finish all the
> steps to convert your HAWQ cluster to HA mode, e.g. do the filespace
> move, make the changes in hdfs-client.xml, etc.
>
> Regards,
> Radar
>
> On Wed, May 30, 2018 at 7:44 PM, <xi...@sky-data.cn> wrote:
>
>> Hi!
>>
>> I have set up HA HDFS using ZKFC with the config below:
>>
>> <configuration>
>>     <property>
>>         <name>fs.defaultFS</name>
>>         <value>hdfs://dx</value>
>>     </property>
>>  <property>
>>    <name>ha.zookeeper.quorum</name>
>>    <value>192.168.60.24:2181,192.168.60.32:2181,192.168.60.37:2181</value>
>>  </property>
>> </configuration>
>>
>> Then I installed HAWQ on another node (not one of the HDFS nodes).
>>
>> I followed http://hawq.incubator.apache.org/docs/userguide/2.3.0.0-incubating/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html
>> and set `hawq_dfs_url` to "dx/hawq_default", but init failed when checking
>> whether the HDFS path is available:
>> [WARNING]:-ERROR: Can not connect to 'hdfs://dx:0'
>>
>> Then I changed `hawq_dfs_url` to "dx:8020/hawq_default"; it still failed:
>> ERROR Failed to setup RPC connection to "dx:8020" caused by:
>> TcpSocket.cpp: 171: HdfsNetworkConnectException: Failed to resolve
>> address "dx:8020" Name or service not known
>>
>> It treats "dx" as a hostname, which is not expected.
>>
>> I also tested "hdfs://dx" and "file://dx"; both failed.
>>
>> I know that "192.168.60.24:8020" works, but it does not use HA HDFS.
>>
>> How can HAWQ use HA HDFS?
>>
>> Thanks
>>
>
>
> --
> Xiang Dai
> Nanjing Sky-Data Information Technology Co., Ltd.
> Tel: +86 13382776490
> Website: www.sky-data.cn
> Try the SkyDiscovery intelligent computing platform for free
>

Re: how hawq use HA hdfs

Posted by xi...@sky-data.cn.
Thank you very much. 



From: "Radar Lei" <rl...@pivotal.io> 
To: "user" <us...@hawq.incubator.apache.org> 
Cc: "dev" <de...@hawq.incubator.apache.org> 
Sent: Thursday, May 31, 2018 11:46:09 AM 
Subject: Re: how hawq use HA hdfs 

It seems you set 'dfs.nameservices' to 'skydata', not to the 'dx' that you defined in hawq-site.xml.

Regards, 
Radar 

On Thu, May 31, 2018 at 11:22 AM, < xiang.dai@sky-data.cn > wrote: 



I had. Here are the related configs in hdfs-client.xml:


<property>
    <name>dfs.nameservices</name>
    <value>skydata</value>
</property>

<property>
    <name>dfs.ha.namenodes.skydata</name>
    <value>nn1,nn2</value>
</property>

<property>
    <name>dfs.namenode.rpc-address.skydata.nn1</name>
    <value>192.168.60.24:8020</value>
</property>

<property>
    <name>dfs.namenode.rpc-address.skydata.nn2</name>
    <value>192.168.60.32:8020</value>
</property>

<property>
    <name>dfs.namenode.http-address.skydata.nn1</name>
    <value>192.168.60.24:50070</value>
</property>

<property>
    <name>dfs.namenode.http-address.skydata.nn2</name>
    <value>192.168.60.32:50070</value>
</property>



From: "Radar Lei" < rlei@pivotal.io > 
To: "user" < user@hawq.incubator.apache.org > 
Cc: "dev" < dev@hawq.incubator.apache.org > 
Sent: Thursday, May 31, 2018 10:27:21 AM 

Subject: Re: how hawq use HA hdfs 

Have you made changes to the HAWQ configuration file 'hdfs-client.xml'?

Regards, 
Radar 

On Thu, May 31, 2018 at 10:07 AM, < xiang.dai@sky-data.cn > wrote: 




Change the following parameter in the $GPHOME/etc/hawq-site.xml file:

<property>
    <name>hawq_dfs_url</name>
    <value>hdpcluster/hawq_default</value>
    <description>URL for accessing HDFS.</description>
</property>


In the listing above: 

    * Replace hdpcluster with the actual service ID that is configured in HDFS. 
    * Replace /hawq_default with the directory you want to use for storing data on HDFS. Make sure this directory exists and is writable. 

In my case, I think hdpcluster is the "dx" defined in core-site.xml.

So I used "dx/hawq_default", but it failed.

Could anyone help me with this?

Thanks very much. 



From: "Radar Lei" < rlei@pivotal.io > 
To: "user" < user@hawq.incubator.apache.org >, "dev" < dev@hawq.incubator.apache.org > 
Sent: Thursday, May 31, 2018 9:50:39 AM 

Subject: Re: how hawq use HA hdfs 

If you are installing a new HAWQ cluster, then the filespace move is not required.
I think HAWQ will treat the host string as a plain host:port URL unless HAWQ's HDFS HA is configured correctly, so please verify whether you missed any other steps.

Regards, 
Radar 

On Thu, May 31, 2018 at 9:40 AM, < xiang.dai@sky-data.cn > wrote: 


I think the filespace move is only needed when HAWQ was installed before HDFS HA was enabled,
but in my case I enabled HDFS HA first and then installed HAWQ.

Given that, I think there is no need to move the filespace.

Besides, my question is: why does the URI not work as expected?
Or which URI should I use?


From: "Radar Lei" < rlei@pivotal.io > 
To: "user" < user@hawq.incubator.apache.org > 
Sent: Wednesday, May 30, 2018 8:30:01 PM 
Subject: Re: how hawq use HA hdfs 

Per the steps in the document you mentioned, you need to finish all the steps to convert your HAWQ cluster to HA mode,
e.g. do the filespace move, make the changes in hdfs-client.xml, etc.

Regards, 
Radar 

On Wed, May 30, 2018 at 7:44 PM, < xiang.dai@sky-data.cn > wrote: 


Hi! 

I have set up HA HDFS using ZKFC with the config below:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://dx</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>192.168.60.24:2181,192.168.60.32:2181,192.168.60.37:2181</value>
    </property>
</configuration>

Then I installed HAWQ on another node (not one of the HDFS nodes).

I followed http://hawq.incubator.apache.org/docs/userguide/2.3.0.0-incubating/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html
and set `hawq_dfs_url` to "dx/hawq_default", but init failed when checking whether the HDFS path is available:
[WARNING]:-ERROR: Can not connect to 'hdfs://dx:0'

Then I changed `hawq_dfs_url` to "dx:8020/hawq_default"; it still failed:
ERROR Failed to setup RPC connection to "dx:8020" caused by: 
TcpSocket.cpp: 171: HdfsNetworkConnectException: Failed to resolve address "dx:8020" Name or service not known 

It treats "dx" as a hostname, which is not expected.

I also tested "hdfs://dx" and "file://dx"; both failed.

I know that "192.168.60.24:8020" works, but it does not use HA HDFS.

How can HAWQ use HA HDFS?

Thanks 





-- 
Xiang Dai
Nanjing Sky-Data Information Technology Co., Ltd.
Tel: +86 13382776490
Website: www.sky-data.cn
Try the SkyDiscovery intelligent computing platform for free




-- 
Xiang Dai
Nanjing Sky-Data Information Technology Co., Ltd.
Tel: +86 13382776490
Website: www.sky-data.cn
Try the SkyDiscovery intelligent computing platform for free




-- 
Xiang Dai
Nanjing Sky-Data Information Technology Co., Ltd.
Tel: +86 13382776490
Website: www.sky-data.cn
Try the SkyDiscovery intelligent computing platform for free




-- 
Xiang Dai
Nanjing Sky-Data Information Technology Co., Ltd.
Tel: +86 13382776490
Website: www.sky-data.cn
Try the SkyDiscovery intelligent computing platform for free


Re: how hawq use HA hdfs

Posted by Radar Lei <rl...@pivotal.io>.
It seems you set 'dfs.nameservices' to 'skydata', not to the 'dx' that you
defined in hawq-site.xml.
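
A sketch of one consistent fix, assuming the RPC addresses in your
hdfs-client.xml point at the real NameNodes: keep the 'skydata' nameservice
and make hawq-site.xml use the same ID.

<property>
    <name>hawq_dfs_url</name>
    <value>skydata/hawq_default</value>
</property>

Alternatively, rename the nameservice to 'dx' throughout hdfs-client.xml
(dfs.nameservices, dfs.ha.namenodes.dx, dfs.namenode.rpc-address.dx.nn1,
and so on) so that it matches fs.defaultFS in core-site.xml. Either way,
the two files must agree on the service ID.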

Regards,
Radar

On Thu, May 31, 2018 at 11:22 AM, <xi...@sky-data.cn> wrote:

> I had. Here are the related configs in hdfs-client.xml:
>
>
> <property>
>     <name>dfs.nameservices</name>
>     <value>skydata</value>
> </property>
>
> <property>
>     <name>dfs.ha.namenodes.skydata</name>
>     <value>nn1,nn2</value>
> </property>
>
> <property>
>     <name>dfs.namenode.rpc-address.skydata.nn1</name>
>     <value>192.168.60.24:8020</value>
> </property>
>
> <property>
>     <name>dfs.namenode.rpc-address.skydata.nn2</name>
>     <value>192.168.60.32:8020</value>
> </property>
>
> <property>
>     <name>dfs.namenode.http-address.skydata.nn1</name>
>     <value>192.168.60.24:50070</value>
> </property>
>
> <property>
>     <name>dfs.namenode.http-address.skydata.nn2</name>
>     <value>192.168.60.32:50070</value>
> </property>
>
>
> ------------------------------
> *From: *"Radar Lei" <rl...@pivotal.io>
> *To: *"user" <us...@hawq.incubator.apache.org>
> *Cc: *"dev" <de...@hawq.incubator.apache.org>
> *Sent: *Thursday, May 31, 2018 10:27:21 AM
>
> *Subject: *Re: how hawq use HA hdfs
>
> Have you made changes to the HAWQ configuration file 'hdfs-client.xml'?
>
> Regards,
> Radar
>
> On Thu, May 31, 2018 at 10:07 AM, <xi...@sky-data.cn> wrote:
>
>> Change the following parameter in the $GPHOME/etc/hawq-site.xml file:
>>
>> <property>
>>     <name>hawq_dfs_url</name>
>>     <value>hdpcluster/hawq_default</value>
>>     <description>URL for accessing HDFS.</description>
>> </property>
>>
>> In the listing above:
>>
>>    - Replace hdpcluster with the actual service ID that is configured in
>>    HDFS.
>>    - Replace /hawq_default with the directory you want to use for
>>    storing data on HDFS. Make sure this directory exists and is writable.
>>
>> In my case, I think hdpcluster is the "dx" defined in core-site.xml.
>>
>> So I used "dx/hawq_default", but it failed.
>>
>> Could anyone help me with this?
>>
>> Thanks very much.
>>
>>
>> ------------------------------
>> *From: *"Radar Lei" <rl...@pivotal.io>
>> *To: *"user" <us...@hawq.incubator.apache.org>, "dev" <
>> dev@hawq.incubator.apache.org>
>> *Sent: *Thursday, May 31, 2018 9:50:39 AM
>>
>> *Subject: *Re: how hawq use HA hdfs
>>
>> If you are installing a new HAWQ cluster, then the filespace move is not required.
>> I think HAWQ will treat the host string as a plain host:port URL unless HAWQ's
>> HDFS HA is configured correctly, so please verify whether you missed any other steps.
>>
>> Regards,
>> Radar
>>
>> On Thu, May 31, 2018 at 9:40 AM, <xi...@sky-data.cn> wrote:
>>
>>> I think the filespace move is only needed when HAWQ was installed before
>>> HDFS HA was enabled, but in my case I enabled HDFS HA first and then
>>> installed HAWQ.
>>>
>>> Given that, I think there is no need to move the filespace.
>>>
>>> Besides, my question is: why does the URI not work as expected?
>>> Or which URI should I use?
>>>
>>> ------------------------------
>>> *From: *"Radar Lei" <rl...@pivotal.io>
>>> *To: *"user" <us...@hawq.incubator.apache.org>
>>> *Sent: *Wednesday, May 30, 2018 8:30:01 PM
>>> *Subject: *Re: how hawq use HA hdfs
>>>
>>> Per the steps in the document you mentioned, you need to finish all the
>>> steps to convert your HAWQ cluster to HA mode, e.g. do the filespace
>>> move, make the changes in hdfs-client.xml, etc.
>>>
>>> Regards,
>>> Radar
>>>
>>> On Wed, May 30, 2018 at 7:44 PM, <xi...@sky-data.cn> wrote:
>>>
>>>> Hi!
>>>>
>>>> I have set up HA HDFS using ZKFC with the config below:
>>>>
>>>> <configuration>
>>>>     <property>
>>>>         <name>fs.defaultFS</name>
>>>>         <value>hdfs://dx</value>
>>>>     </property>
>>>>  <property>
>>>>    <name>ha.zookeeper.quorum</name>
>>>>    <value>192.168.60.24:2181,192.168.60.32:2181,192.168.60.37:2181</value>
>>>>  </property>
>>>> </configuration>
>>>>
>>>> Then I installed HAWQ on another node (not one of the HDFS nodes).
>>>>
>>>> I followed http://hawq.incubator.apache.org/docs/userguide/2.3.0.0-incubating/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html
>>>> and set `hawq_dfs_url` to "dx/hawq_default", but init failed when checking
>>>> whether the HDFS path is available:
>>>> [WARNING]:-ERROR: Can not connect to 'hdfs://dx:0'
>>>>
>>>> Then I changed `hawq_dfs_url` to "dx:8020/hawq_default"; it still failed:
>>>> ERROR Failed to setup RPC connection to "dx:8020" caused by:
>>>> TcpSocket.cpp: 171: HdfsNetworkConnectException: Failed to resolve
>>>> address "dx:8020" Name or service not known
>>>>
>>>> It treats "dx" as a hostname, which is not expected.
>>>>
>>>> I also tested "hdfs://dx" and "file://dx"; both failed.
>>>>
>>>> I know that "192.168.60.24:8020" works, but it does not use HA HDFS.
>>>>
>>>> How can HAWQ use HA HDFS?
>>>>
>>>> Thanks
>>>>
>>>
>>>
>>> --
>>> Xiang Dai
>>> Nanjing Sky-Data Information Technology Co., Ltd.
>>> Tel: +86 13382776490
>>> Website: www.sky-data.cn
>>> Try the SkyDiscovery intelligent computing platform for free
>>>
>>
>>
>> --
>> Xiang Dai
>> Nanjing Sky-Data Information Technology Co., Ltd.
>> Tel: +86 13382776490
>> Website: www.sky-data.cn
>> Try the SkyDiscovery intelligent computing platform for free
>>
>
>
> --
> Xiang Dai
> Nanjing Sky-Data Information Technology Co., Ltd.
> Tel: +86 13382776490
> Website: www.sky-data.cn
> Try the SkyDiscovery intelligent computing platform for free
>


Re: how hawq use HA hdfs

Posted by xi...@sky-data.cn.
I had. Here are the related configs in hdfs-client.xml:


<property>
    <name>dfs.nameservices</name>
    <value>skydata</value>
</property>

<property>
    <name>dfs.ha.namenodes.skydata</name>
    <value>nn1,nn2</value>
</property>

<property>
    <name>dfs.namenode.rpc-address.skydata.nn1</name>
    <value>192.168.60.24:8020</value>
</property>

<property>
    <name>dfs.namenode.rpc-address.skydata.nn2</name>
    <value>192.168.60.32:8020</value>
</property>

<property>
    <name>dfs.namenode.http-address.skydata.nn1</name>
    <value>192.168.60.24:50070</value>
</property>

<property>
    <name>dfs.namenode.http-address.skydata.nn2</name>
    <value>192.168.60.32:50070</value>
</property>
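
For comparison, hawq_dfs_url in my hawq-site.xml is still the value from my first mail:

<property>
    <name>hawq_dfs_url</name>
    <value>dx/hawq_default</value>
</property>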



From: "Radar Lei" <rl...@pivotal.io> 
To: "user" <us...@hawq.incubator.apache.org> 
Cc: "dev" <de...@hawq.incubator.apache.org> 
Sent: Thursday, May 31, 2018 10:27:21 AM 
Subject: Re: how hawq use HA hdfs 

Have you made changes to the HAWQ configuration file 'hdfs-client.xml'?

Regards, 
Radar 

On Thu, May 31, 2018 at 10:07 AM, < xiang.dai@sky-data.cn > wrote: 





Change the following parameter in the $GPHOME/etc/hawq-site.xml file:

<property>
    <name>hawq_dfs_url</name>
    <value>hdpcluster/hawq_default</value>
    <description>URL for accessing HDFS.</description>
</property>


In the listing above: 

    * Replace hdpcluster with the actual service ID that is configured in HDFS. 
    * Replace /hawq_default with the directory you want to use for storing data on HDFS. Make sure this directory exists and is writable. 

In my case, I think hdpcluster is the "dx" defined in core-site.xml.

So I used "dx/hawq_default", but it failed.

Could anyone help me with this?

Thanks very much. 



From: "Radar Lei" < rlei@pivotal.io > 
To: "user" < user@hawq.incubator.apache.org >, "dev" < dev@hawq.incubator.apache.org > 
Sent: Thursday, May 31, 2018 9:50:39 AM 

Subject: Re: how hawq use HA hdfs 

If you are installing a new HAWQ cluster, then the filespace move is not required.
I think HAWQ will treat the host string as a plain host:port URL unless HAWQ's HDFS HA is configured correctly, so please verify whether you missed any other steps.

Regards, 
Radar 

On Thu, May 31, 2018 at 9:40 AM, < xiang.dai@sky-data.cn > wrote: 


I think the filespace move is only needed when HAWQ was installed before HDFS HA was enabled,
but in my case I enabled HDFS HA first and then installed HAWQ.

Given that, I think there is no need to move the filespace.

Besides, my question is: why does the URI not work as expected?
Or which URI should I use?


From: "Radar Lei" < rlei@pivotal.io > 
To: "user" < user@hawq.incubator.apache.org > 
Sent: Wednesday, May 30, 2018 8:30:01 PM 
Subject: Re: how hawq use HA hdfs 

Per the steps in the document you mentioned, you need to finish all the steps to convert your HAWQ cluster to HA mode,
e.g. do the filespace move, make the changes in hdfs-client.xml, etc.

Regards, 
Radar 

On Wed, May 30, 2018 at 7:44 PM, < xiang.dai@sky-data.cn > wrote: 


Hi! 

I have set up HA HDFS using ZKFC with the config below:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://dx</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>192.168.60.24:2181,192.168.60.32:2181,192.168.60.37:2181</value>
    </property>
</configuration>

Then I installed HAWQ on another node (not one of the HDFS nodes).

I followed http://hawq.incubator.apache.org/docs/userguide/2.3.0.0-incubating/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html
and set `hawq_dfs_url` to "dx/hawq_default", but init failed when checking whether the HDFS path is available:
[WARNING]:-ERROR: Can not connect to 'hdfs://dx:0'

Then I changed `hawq_dfs_url` to "dx:8020/hawq_default"; it still failed:
ERROR Failed to setup RPC connection to "dx:8020" caused by: 
TcpSocket.cpp: 171: HdfsNetworkConnectException: Failed to resolve address "dx:8020" Name or service not known 

It treats "dx" as a hostname, which is not expected.

I also tested "hdfs://dx" and "file://dx"; both failed.

I know that "192.168.60.24:8020" works, but it does not use HA HDFS.

How can HAWQ use HA HDFS?

Thanks 





-- 
Xiang Dai
Nanjing Sky-Data Information Technology Co., Ltd.
Tel: +86 13382776490
Website: www.sky-data.cn
Try the SkyDiscovery intelligent computing platform for free




-- 
Xiang Dai
Nanjing Sky-Data Information Technology Co., Ltd.
Tel: +86 13382776490
Website: www.sky-data.cn
Try the SkyDiscovery intelligent computing platform for free




-- 
Xiang Dai
Nanjing Sky-Data Information Technology Co., Ltd.
Tel: +86 13382776490
Website: www.sky-data.cn
Try the SkyDiscovery intelligent computing platform for free


Re: how hawq use HA hdfs

Posted by Radar Lei <rl...@pivotal.io>.
Have you made changes to the HAWQ configuration file 'hdfs-client.xml'?

Regards,
Radar

On Thu, May 31, 2018 at 10:07 AM, <xi...@sky-data.cn> wrote:

> Change the following parameter in the $GPHOME/etc/hawq-site.xml file:
>
> <property>
>     <name>hawq_dfs_url</name>
>     <value>hdpcluster/hawq_default</value>
>     <description>URL for accessing HDFS.</description>
> </property>
>
> In the listing above:
>
>    - Replace hdpcluster with the actual service ID that is configured in
>    HDFS.
>    - Replace /hawq_default with the directory you want to use for storing
>    data on HDFS. Make sure this directory exists and is writable.
>
> In my case, I think hdpcluster is the "dx" defined in core-site.xml.
>
> So I used "dx/hawq_default", but it failed.
>
> Could anyone help me with this?
>
> Thanks very much.
>
>
> ------------------------------
> *From: *"Radar Lei" <rl...@pivotal.io>
> *To: *"user" <us...@hawq.incubator.apache.org>, "dev" <
> dev@hawq.incubator.apache.org>
> *Sent: *Thursday, May 31, 2018 9:50:39 AM
>
> *Subject: *Re: how hawq use HA hdfs
>
> If you are installing a new HAWQ cluster, then the filespace move is not required.
> I think HAWQ will treat the host string as a plain host:port URL unless HAWQ's
> HDFS HA is configured correctly, so please verify whether you missed any other steps.
>
> Regards,
> Radar
>
> On Thu, May 31, 2018 at 9:40 AM, <xi...@sky-data.cn> wrote:
>
>> I think the filespace move is only needed when HAWQ was installed before
>> HDFS HA was enabled, but in my case I enabled HDFS HA first and then
>> installed HAWQ.
>>
>> Given that, I think there is no need to move the filespace.
>>
>> Besides, my question is: why does the URI not work as expected?
>> Or which URI should I use?
>>
>> ------------------------------
>> *From: *"Radar Lei" <rl...@pivotal.io>
>> *To: *"user" <us...@hawq.incubator.apache.org>
>> *Sent: *Wednesday, May 30, 2018 8:30:01 PM
>> *Subject: *Re: how hawq use HA hdfs
>>
>> Per the steps in the document you mentioned, you need to finish all the
>> steps to convert your HAWQ cluster to HA mode, e.g. do the filespace
>> move, make the changes in hdfs-client.xml, etc.
>>
>> Regards,
>> Radar
>>
>> On Wed, May 30, 2018 at 7:44 PM, <xi...@sky-data.cn> wrote:
>>
>>> Hi!
>>>
>>> I have set up HA HDFS using ZKFC with the config below:
>>>
>>> <configuration>
>>>     <property>
>>>         <name>fs.defaultFS</name>
>>>         <value>hdfs://dx</value>
>>>     </property>
>>>  <property>
>>>    <name>ha.zookeeper.quorum</name>
>>>    <value>192.168.60.24:2181,192.168.60.32:2181,192.168.60.37:2181</value>
>>>  </property>
>>> </configuration>
>>>
>>> Then I installed HAWQ on another node (not one of the HDFS nodes).
>>>
>>> I followed http://hawq.incubator.apache.org/docs/userguide/2.3.0.0-incubating/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html
>>> and set `hawq_dfs_url` to "dx/hawq_default", but init failed when checking
>>> whether the HDFS path is available:
>>> [WARNING]:-ERROR: Can not connect to 'hdfs://dx:0'
>>>
>>> Then I changed `hawq_dfs_url` to "dx:8020/hawq_default"; it still failed:
>>> ERROR Failed to setup RPC connection to "dx:8020" caused by:
>>> TcpSocket.cpp: 171: HdfsNetworkConnectException: Failed to resolve
>>> address "dx:8020" Name or service not known
>>>
>>> It treats "dx" as a hostname, which is not expected.
>>>
>>> I also tested "hdfs://dx" and "file://dx"; both failed.
>>>
>>> I know that "192.168.60.24:8020" works, but it does not use HA HDFS.
>>>
>>> How can HAWQ use HA HDFS?
>>>
>>> Thanks
>>>
>>
>>
>> --
>> Xiang Dai
>> Nanjing Sky-Data Information Technology Co., Ltd.
>> Tel: +86 13382776490
>> Website: www.sky-data.cn
>> Try the SkyDiscovery intelligent computing platform for free
>>
>
>
> --
> Xiang Dai
> Nanjing Sky-Data Information Technology Co., Ltd.
> Tel: +86 13382776490
> Website: www.sky-data.cn
> Try the SkyDiscovery intelligent computing platform for free
>


Re: how hawq use HA hdfs

Posted by xi...@sky-data.cn.

Change the following parameter in the $GPHOME/etc/hawq-site.xml file:

<property>
    <name>hawq_dfs_url</name>
    <value>hdpcluster/hawq_default</value>
    <description>URL for accessing HDFS.</description>
</property>


In the listing above: 

    * Replace hdpcluster with the actual service ID that is configured in HDFS. 
    * Replace /hawq_default with the directory you want to use for storing data on HDFS. Make sure this directory exists and is writable. 

In my case, I think hdpcluster is the "dx" defined in core-site.xml.

So I used "dx/hawq_default", but it failed; one way to double-check the actual service ID is sketched below.
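
A quick way to confirm the service ID that HDFS itself is configured with
(a sketch, assuming the hdfs CLI on a cluster node can see the cluster's
config files):

hdfs getconf -confKey dfs.nameservices

Note that HAWQ reads its own copy of the client settings from
$GPHOME/etc/hdfs-client.xml, so the ID there and the one in hawq_dfs_url
have to agree.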

Could anyone help me with this?

Thanks very much. 



From: "Radar Lei" <rl...@pivotal.io> 
To: "user" <us...@hawq.incubator.apache.org>, "dev" <de...@hawq.incubator.apache.org> 
Sent: Thursday, May 31, 2018 9:50:39 AM 
Subject: Re: how hawq use HA hdfs 

If you are installing a new HAWQ cluster, then the filespace move is not required.
I think HAWQ will treat the host string as a plain host:port URL unless HAWQ's HDFS HA is configured correctly, so please verify whether you missed any other steps.

Regards, 
Radar 

On Thu, May 31, 2018 at 9:40 AM, < xiang.dai@sky-data.cn > wrote: 



I think the filespace move is only needed when HAWQ was installed before HDFS HA was enabled,
but in my case I enabled HDFS HA first and then installed HAWQ.

Given that, I think there is no need to move the filespace.

Besides, my question is: why does the URI not work as expected?
Or which URI should I use?


From: "Radar Lei" < rlei@pivotal.io > 
To: "user" < user@hawq.incubator.apache.org > 
Sent: Wednesday, May 30, 2018 8:30:01 PM 
Subject: Re: how hawq use HA hdfs 

Per the steps in the document you mentioned, you need to finish all the steps to convert your HAWQ cluster to HA mode,
e.g. do the filespace move, make the changes in hdfs-client.xml, etc.

Regards, 
Radar 

On Wed, May 30, 2018 at 7:44 PM, < xiang.dai@sky-data.cn > wrote: 


Hi! 

I have set up HA HDFS using ZKFC with the config below:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://dx</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>192.168.60.24:2181,192.168.60.32:2181,192.168.60.37:2181</value>
    </property>
</configuration>

Then I installed HAWQ on another node (not one of the HDFS nodes).

I followed http://hawq.incubator.apache.org/docs/userguide/2.3.0.0-incubating/admin/HAWQFilespacesandHighAvailabilityEnabledHDFS.html
and set `hawq_dfs_url` to "dx/hawq_default", but init failed when checking whether the HDFS path is available:
[WARNING]:-ERROR: Can not connect to 'hdfs://dx:0'

Then I changed `hawq_dfs_url` to "dx:8020/hawq_default"; it still failed:
ERROR Failed to setup RPC connection to "dx:8020" caused by: 
TcpSocket.cpp: 171: HdfsNetworkConnectException: Failed to resolve address "dx:8020" Name or service not known 

It treats "dx" as a hostname, which is not expected.

I also tested "hdfs://dx" and "file://dx"; both failed.

I know that "192.168.60.24:8020" works, but it does not use HA HDFS.

How can HAWQ use HA HDFS?

Thanks 





-- 
Xiang Dai
Nanjing Sky-Data Information Technology Co., Ltd.
Tel: +86 13382776490
Website: www.sky-data.cn
Try the SkyDiscovery intelligent computing platform for free




-- 
Xiang Dai
Nanjing Sky-Data Information Technology Co., Ltd.
Tel: +86 13382776490
Website: www.sky-data.cn
Try the SkyDiscovery intelligent computing platform for free
