Posted to issues@arrow.apache.org by "Ben Schreck (Jira)" <ji...@apache.org> on 2019/09/01 21:20:00 UTC

[jira] [Commented] (ARROW-5922) [Python] Unable to connect to HDFS from a worker/data node on a Kerberized cluster using pyarrow' hdfs API

    [ https://issues.apache.org/jira/browse/ARROW-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920516#comment-16920516 ] 

Ben Schreck commented on ARROW-5922:
------------------------------------

I fixed it by setting HADOOP_HOME=/usr in the worker environment. Pyarrow builds the path to the hadoop executable as $HADOOP_HOME/bin/hadoop (python/pyarrow/hdfs.py:L137). On my system $HADOOP_HOME was set to /usr/bin/hadoop (the full path of the executable itself), so pyarrow ended up looking for /usr/bin/hadoop/bin/hadoop, and the $CLASSPATH it derived from that was wrong.
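A minimal sketch of why the wrong HADOOP_HOME breaks the lookup. The helper `hadoop_bin` below is illustrative, not pyarrow's actual code; it just mimics the `$HADOOP_HOME/bin/hadoop` concatenation described above:

```python
import posixpath

def hadoop_bin(hadoop_home):
    # Mimics pyarrow appending "bin/hadoop" to whatever $HADOOP_HOME holds
    # (illustrative only; see python/pyarrow/hdfs.py:L137 for the real code).
    return posixpath.join(hadoop_home, "bin", "hadoop")

# HADOOP_HOME mistakenly set to the executable's own path:
print(hadoop_bin("/usr/bin/hadoop"))  # /usr/bin/hadoop/bin/hadoop (bogus)

# HADOOP_HOME set to the install prefix, as the fix does:
print(hadoop_bin("/usr"))             # /usr/bin/hadoop
```

In other words, HADOOP_HOME must name the prefix that *contains* bin/hadoop, not the executable itself.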

> [Python] Unable to connect to HDFS from a worker/data node on a Kerberized cluster using pyarrow' hdfs API
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: ARROW-5922
>                 URL: https://issues.apache.org/jira/browse/ARROW-5922
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.14.0
>         Environment: Unix
>            Reporter: Saurabh Bajaj
>            Priority: Major
>             Fix For: 0.14.0
>
>
> Here's what I'm trying:
> ```
> import pyarrow as pa
> conf = {"hadoop.security.authentication": "kerberos"}
> fs = pa.hdfs.connect(kerb_ticket="/tmp/krb5cc_44444", extra_conf=conf)
> ```
> However, when I submit this job to the cluster using Dask-YARN, I get the following error:
> ```
> File "test/run.py", line 3
>     fs = pa.hdfs.connect(kerb_ticket="/tmp/krb5cc_44444", extra_conf=conf)
> File "/opt/hadoop/data/10/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_183242/container_e47_1560931326013_183242_01_000003/environment/lib/python3.7/site-packages/pyarrow/hdfs.py", line 211, in connect
> File "/opt/hadoop/data/10/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_183242/container_e47_1560931326013_183242_01_000003/environment/lib/python3.7/site-packages/pyarrow/hdfs.py", line 38, in __init__
> File "pyarrow/io-hdfs.pxi", line 105, in pyarrow.lib.HadoopFileSystem._connect
> File "pyarrow/error.pxi", line 83, in pyarrow.lib.check_status
> pyarrow.lib.ArrowIOError: HDFS connection failed
> ```
> I also tried setting host (to a name node) and port (=8020); however, I run into the same error. Since the error is not descriptive, I'm not sure which setting needs to be altered. Any clues, anyone?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)