Posted to jira@arrow.apache.org by "Antoine Pitrou (Jira)" <ji...@apache.org> on 2020/06/25 16:24:00 UTC
[jira] [Commented] (ARROW-9226) [Python] pyarrow.fs.HadoopFileSystem - retrieve options from core-site.xml or hdfs-site.xml if available
[ https://issues.apache.org/jira/browse/ARROW-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145070#comment-17145070 ]
Antoine Pitrou commented on ARROW-9226:
---------------------------------------
> 'Legacy' pyarrow.hdfs.connect was somehow able to get the namenode info from the hadoop configuration files.
Can you elaborate? Did it happen automatically or did you have to pass an option?
> [Python] pyarrow.fs.HadoopFileSystem - retrieve options from core-site.xml or hdfs-site.xml if available
> --------------------------------------------------------------------------------------------------------
>
> Key: ARROW-9226
> URL: https://issues.apache.org/jira/browse/ARROW-9226
> Project: Apache Arrow
> Issue Type: Improvement
> Components: C++, Python
> Affects Versions: 0.17.1
> Reporter: Bruno Quinart
> Priority: Minor
> Fix For: 1.0.0
>
>
> 'Legacy' pyarrow.hdfs.connect was somehow able to get the namenode info from the hadoop configuration files.
> The new pyarrow.fs.HadoopFileSystem requires the host to be specified.
> Inferring this info from "the environment" makes it easier to deploy pipelines.
> But more importantly, with HA namenodes it is almost impossible to know for sure which host to specify. During a rolling restart the active namenode changes, and in an HA setup there is no guarantee which node will be active at any given time.
> I tried connecting to the standby namenode. The connection gets established, but writing a file raises an error because writes to a standby namenode are not allowed.
>
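As a rough illustration of the requested behavior, the namenode host and port could be read from core-site.xml using only the standard library. This is a minimal sketch, not pyarrow code: the helper name namenode_from_core_site is hypothetical, and it assumes HADOOP_CONF_DIR points at a directory containing core-site.xml with fs.defaultFS set.

```python
import os
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

def namenode_from_core_site(conf_dir):
    """Read fs.defaultFS from core-site.xml and split it into (host, port).

    Hypothetical helper, sketching how pyarrow.fs.HadoopFileSystem
    options could be inferred from the Hadoop configuration files.
    """
    tree = ET.parse(os.path.join(conf_dir, "core-site.xml"))
    for prop in tree.getroot().iter("property"):
        if prop.findtext("name") == "fs.defaultFS":
            # e.g. <value>hdfs://nn.example.com:8020</value>
            uri = urlparse(prop.findtext("value"))
            return uri.hostname, uri.port or 8020  # 8020 is the usual default
    raise KeyError("fs.defaultFS not set in core-site.xml")

# host, port = namenode_from_core_site(os.environ["HADOOP_CONF_DIR"])
# fs = pyarrow.fs.HadoopFileSystem(host, port)
```

Note that this sketch does not solve the HA case described above: in an HA deployment fs.defaultFS typically names a logical nameservice ID rather than a resolvable host:port, so delegating the full configuration handling to libhdfs (as the legacy pyarrow.hdfs.connect appears to have done) would be the more robust fix.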
--
This message was sent by Atlassian Jira
(v8.3.4#803005)