Posted to issues@arrow.apache.org by "Micah Kornfield (JIRA)" <ji...@apache.org> on 2019/04/25 20:09:00 UTC

[jira] [Resolved] (ARROW-5049) [Python] org/apache/hadoop/fs/FileSystem class not found when pyarrow FileSystem used in spark

     [ https://issues.apache.org/jira/browse/ARROW-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Micah Kornfield resolved ARROW-5049.
------------------------------------
    Resolution: Fixed

Issue resolved by pull request 4081
[https://github.com/apache/arrow/pull/4081]

> [Python] org/apache/hadoop/fs/FileSystem class not found when pyarrow FileSystem used in spark
> ----------------------------------------------------------------------------------------------
>
>                 Key: ARROW-5049
>                 URL: https://issues.apache.org/jira/browse/ARROW-5049
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.12.0, 0.13.0, 0.12.1
>            Reporter: Tiger068
>            Assignee: Tiger068
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.14.0
>
>          Time Spent: 3h
>  Remaining Estimate: 0h
>
> When I initialize a pyarrow FileSystem to connect to an HDFS cluster in Spark, libhdfs throws this error:
> {code:java}
> org/apache/hadoop/fs/FileSystem class not found 
> {code}
> Printing the CLASSPATH shows its value is in wildcard/directory mode:
> {code:java}
> ../share/hadoop/hdfs;spark/spark-2.0.2-bin-hadoop2.7/jars...
> {code}
> This value is set by Spark, but libhdfs must load classes from explicit jar files.
>  
> Root cause:
> In hdfs.py we only check whether the string "hadoop" appears in the CLASSPATH; we never verify that it actually points at jar files:
> {code:python}
> import os
>
> def _maybe_set_hadoop_classpath():
>     # Bug: returns early whenever 'hadoop' appears anywhere in CLASSPATH,
>     # even when the matching entries are directories rather than jars.
>     if 'hadoop' in os.environ.get('CLASSPATH', ''):
>         return
> {code}
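A stricter check would verify that the CLASSPATH actually names Hadoop jar files rather than merely containing the substring "hadoop". Below is a minimal sketch of such a check; the helper name and regex are hypothetical illustrations, not the actual code merged in pull request 4081:

{code:python}
import os
import re

def _classpath_contains_hadoop_jars(classpath):
    # Hypothetical stricter check: the substring 'hadoop' alone (e.g. a
    # directory entry like .../share/hadoop/hdfs) is not enough, because
    # libhdfs can only load classes from explicitly listed jar files.
    for entry in re.split(r'[:;]', classpath):
        if re.search(r'hadoop.*\.jar$', os.path.basename(entry)):
            return True
    return False
{code}

With the directory-style CLASSPATH from this report the sketch returns False, so a caller could go on to build an expanded jar list instead of returning early.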



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)