Posted to user@flink.apache.org by Raul Valdoleiros <ra...@gmail.com> on 2018/05/21 14:19:40 UTC

Multiple hdfs

Hi,

I want to store my data in one HDFS cluster and the Flink checkpoints in
another HDFS cluster. I didn't find a way to do it; can anyone point me in
the right direction?

Thanks in advance,
Raul

Re: Multiple hdfs

Posted by Rong Rong <wa...@gmail.com>.
+1 on viewfs, I was going to add that :-)

To add to this, viewfs can be used as a federation layer to support
multiple HDFS clusters for checkpoint/savepoint HA purposes as well.

--
Rong
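
A minimal sketch of what the Flink configuration could look like once such
a viewfs mount table is in place (the mount table itself is sketched under
Stephan's reply below). The option names are Flink's checkpoint, savepoint
and HA storage settings; all paths and the mount table name "federation"
are only illustrative:

    # flink-conf.yaml (illustrative values)
    state.backend: filesystem
    state.checkpoints.dir: viewfs://federation/checkpoints
    state.savepoints.dir: viewfs://federation/savepoints
    high-availability.storageDir: viewfs://federation/flink/ha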


Re: Multiple hdfs

Posted by Stephan Ewen <se...@apache.org>.
I think that Hadoop recommends solving such setups with a viewfs:// that
spans both HDFS clusters; the two different clusters then look like
different paths within one file system, similar to mounting different file
systems into one directory tree in Unix.
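
A minimal sketch of such a mount table, defined in the core-site.xml that
Flink reads; the mount table name "federation", the link paths and the
nameservice names are only placeholders:

    <!-- core-site.xml: viewfs mount table spanning two HDFS clusters -->
    <property>
      <name>fs.defaultFS</name>
      <value>viewfs://federation</value>
    </property>
    <!-- /checkpoints resolves to the first cluster -->
    <property>
      <name>fs.viewfs.mounttable.federation.link./checkpoints</name>
      <value>hdfs://cluster1ha/checkpoint</value>
    </property>
    <!-- /data resolves to the second cluster -->
    <property>
      <name>fs.viewfs.mounttable.federation.link./data</name>
      <value>hdfs://cluster2ha/data</value>
    </property>

With such a mount table, viewfs://federation/checkpoints and
viewfs://federation/data point at the two underlying clusters, so both
appear as paths within one file system, as described above.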


Re: Multiple hdfs

Posted by Kien Truong <du...@gmail.com>.
You only need to modify the core-site.xml and hdfs-site.xml read by Flink.

Regards,

Kiên
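
For illustration, Flink typically picks up that single set of Hadoop
configuration files from the directory given by the HADOOP_CONF_DIR
environment variable, or from the fs.hdfs.hadoopconf option in
flink-conf.yaml; the path below is only an example:

    # flink-conf.yaml (illustrative path)
    fs.hdfs.hadoopconf: /etc/hadoop/conf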


Re: Multiple hdfs

Posted by Deepak Sharma <de...@gmail.com>.
Wouldn't two core-site and hdfs-site XMLs need to be provided in this case,
then?

Thanks
Deepak


Re: Multiple hdfs

Posted by Raul Valdoleiros <ra...@gmail.com>.
Hi Kien,

Thanks for your reply.

My goal is to store the checkpoints in one HDFS cluster and the data in
another HDFS cluster.

So Flink should be able to connect to two different HDFS clusters.

Thanks


Re: Multiple hdfs

Posted by Kien Truong <du...@gmail.com>.
Hi,

If your clusters are not high-availability clusters, then just use the
full path to the cluster.

For example, to refer to directory /checkpoint on cluster1, use 
hdfs://namenode1_ip:port/checkpoint

Likewise, /data on cluster2 will be hdfs://namenode2_ip:port/data


If your cluster is an HA cluster, then you need to modify the
hdfs-site.xml as described in section 1 of this guide:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_administration/content/distcp_between_ha_clusters.html

Then use the full paths to the clusters: hdfs://cluster1ha/checkpoint &
hdfs://cluster2ha/data

Regards,
Kien
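
A rough sketch of the client-side hdfs-site.xml additions that the guide
above describes, assuming the local HA nameservice is cluster1ha, the
remote one is cluster2ha, and the namenode hostnames are placeholders:

    <!-- hdfs-site.xml: make the remote HA nameservice resolvable -->
    <property>
      <name>dfs.nameservices</name>
      <value>cluster1ha,cluster2ha</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.cluster2ha</name>
      <value>nn1,nn2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.cluster2ha.nn1</name>
      <value>namenode3.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.cluster2ha.nn2</name>
      <value>namenode4.example.com:8020</value>
    </property>
    <property>
      <name>dfs.client.failover.proxy.provider.cluster2ha</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

With that in place, hdfs://cluster1ha/checkpoint and hdfs://cluster2ha/data
can be used side by side, for example as Flink's checkpoint directory and
as the job's input/output paths.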
