Posted to dev@hbase.apache.org by "stack (Jira)" <ji...@apache.org> on 2019/09/04 04:43:00 UTC

[jira] [Resolved] (HBASE-22951) [HBCK2] hbase hbck throws IOE "No FileSystem for scheme: hdfs"

     [ https://issues.apache.org/jira/browse/HBASE-22951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack resolved HBASE-22951.
---------------------------
    Fix Version/s: hbase-operator-tools-1.0.0
     Hadoop Flags: Reviewed
       Resolution: Fixed

Merged. Thanks for the review [~busbey] (I keep forgetting to add you as signed-off-by. Somehow it is just you; I add others easily).

> [HBCK2] hbase hbck throws IOE "No FileSystem for scheme: hdfs"
> --------------------------------------------------------------
>
>                 Key: HBASE-22951
>                 URL: https://issues.apache.org/jira/browse/HBASE-22951
>             Project: HBase
>          Issue Type: Bug
>          Components: documentation, hbck2
>            Reporter: stack
>            Assignee: stack
>            Priority: Major
>             Fix For: hbase-operator-tools-1.0.0
>
>
> Input appreciated on this one.
> If I run the below, passing a config that points at an HDFS, I get the following exception (if I run without the config, hbck just picks up the wrong fs -- the local fs).
> {code}
> $ /vagrant/hbase/bin/hbase --config hbase-conf  hbck
> 2019-08-30 05:04:54,467 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Exception in thread "main" java.io.IOException: No FileSystem for scheme: hdfs
>         at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2799)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>         at org.apache.hadoop.hbase.util.CommonFSUtils.getRootDir(CommonFSUtils.java:361)
>         at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3605)
> {code}
> It's because the CLASSPATH is carefully curated so as to use the shaded client only; there are intentionally no hdfs classes on the CLASSPATH.
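> You can check this from the shell; a quick sketch (assuming 'bin/hbase classpath' reflects the CLASSPATH hbck is launched with):
> {code}
> # List each CLASSPATH entry on its own line and look for hdfs jars.
> $ /vagrant/hbase/bin/hbase classpath | tr ':' '\n' | grep hdfs
> {code}
> If no hdfs jar shows up, you are in the failure mode above.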
> So, how to fix? This happens with both hbck1 and hbck2 (you have to do an hdfs operation for hbck2 to trigger the same issue).
> We could be careful in hbck2 and note in the docs that if you want to do a filesystem operation, you need to add the hdfs jars to the CLASSPATH so hbck2 can go against hdfs; something like the sketch below.
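> (The jar path below is a placeholder for wherever your Hadoop install keeps its hdfs client jar, and I'm assuming bin/hbase still honors HBASE_CLASSPATH on this code path.)
> {code}
> # Put the hdfs client jar on the CLASSPATH before invoking hbck.
> $ export HBASE_CLASSPATH=/path/to/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.5.jar
> $ /vagrant/hbase/bin/hbase --config hbase-conf hbck
> {code}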
> If I add the '--internal-classpath' flag, then all classes are put on the CLASSPATH for hbck(2) (including the hdfs client jar, which got the hdfs implementation after 2.7.2 was released) and stuff 'works'.
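> i.e. something like:
> {code}
> # Skip the curated shaded-client CLASSPATH; put everything on it.
> $ /vagrant/hbase/bin/hbase --config hbase-conf --internal-classpath hbck
> {code}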
> We could edit the bin/hbase script so hdfs classes get added to the hbck CLASSPATH? Could see if an hdfs client-only addition would do? A rough sketch below.
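> (Variable names and the jar glob here are guesses at the script internals, not the actual contents of bin/hbase.)
> {code}
> # In bin/hbase, after the curated hbck CLASSPATH is built:
> if [ "$COMMAND" = "hbck" ]; then
>   # Append just the hdfs client jars; glob/path are placeholders.
>   for f in "${HADOOP_HOME}"/share/hadoop/hdfs/hadoop-hdfs-client-*.jar; do
>     CLASSPATH="${CLASSPATH}:${f}"
>   done
> fi
> {code}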
> Anyways, putting this up for now. Others may have opinions. Thanks.


