Posted to user@flink.apache.org by "wanglei2@geekplus.com.cn" <wa...@geekplus.com.cn> on 2020/05/07 09:04:08 UTC
flink how to access remote hdfs using namenode nameservice
According to https://ci.apache.org/projects/flink/flink-docs-stable/ops/jobmanager_high_availability.html
I am deploying a standalone cluster with JobManager HA, which requires an HDFS address:
high-availability.storageDir: hdfs:///flink/recovery
My Hadoop cluster is remote. I can write the address as hdfs://active-namenode-ip:8020, but that way I lose NameNode HA.
Is there any way to configure it as hdfs://name-service:8020?
Thanks,
Lei
wanglei2@geekplus.com.cn
Re: flink how to access remote hdfs using namenode nameservice
Posted by Yang Wang <da...@gmail.com>.
Do you mean to use the HDFS nameservice? You can find it under the config key
"dfs.nameservices" in hdfs-site.xml. For example:
hdfs://myhdfs/flink/recovery
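As a concrete sketch, the client-side hdfs-site.xml that Flink picks up would define the nameservice roughly like this (the nameservice name "myhdfs", the NameNode IDs, and the hostnames below are all placeholders, not values from this thread):

```xml
<!-- hdfs-site.xml: client-side HA nameservice definition (all names are examples) -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>myhdfs</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.myhdfs</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.myhdfs.nn1</name>
    <value>namenode1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.myhdfs.nn2</name>
    <value>namenode2.example.com:8020</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.myhdfs</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>
```

With that in place, flink-conf.yaml can reference the nameservice directly, with no port:

high-availability.storageDir: hdfs://myhdfs/flink/recovery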
Please keep in mind that you need to set the HADOOP_CONF_DIR environment
variable beforehand.
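For example, before starting the cluster (the path /etc/hadoop/conf is an assumption; use whichever directory actually holds your core-site.xml and hdfs-site.xml):

```shell
# Make the HA nameservice definition visible to Flink's HDFS client.
# /etc/hadoop/conf is an assumed path, not from this thread.
export HADOOP_CONF_DIR=/etc/hadoop/conf
echo "$HADOOP_CONF_DIR"
```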
Best,
Yang
wanglei2@geekplus.com.cn <wa...@geekplus.com.cn> wrote on Thu, May 7, 2020 at 5:04 PM: