Posted to dev@hbase.apache.org by "Andrew Kyle Purtell (Jira)" <ji...@apache.org> on 2020/11/24 16:51:00 UTC
[jira] [Resolved] (HBASE-25050) We initialize Filesystems more than once.
[ https://issues.apache.org/jira/browse/HBASE-25050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Andrew Kyle Purtell resolved HBASE-25050.
-----------------------------------------
Hadoop Flags: Reviewed
Resolution: Fixed
> We initialize Filesystems more than once.
> -----------------------------------------
>
> Key: HBASE-25050
> URL: https://issues.apache.org/jira/browse/HBASE-25050
> Project: HBase
> Issue Type: Bug
> Affects Versions: 3.0.0-alpha-1, 2.3.1, 2.4.0, 2.2.6
> Reporter: ramkrishna.s.vasudevan
> Assignee: ramkrishna.s.vasudevan
> Priority: Minor
> Fix For: 3.0.0-alpha-1, 2.4.0
>
>
> In HFileSystem
> {code}
> // Create the default filesystem with checksum verification switched on.
> // By default, any operation to this FilterFileSystem occurs on
> // the underlying filesystem that has checksums switched on.
> this.fs = FileSystem.get(conf);
> this.useHBaseChecksum = useHBaseChecksum;
> fs.initialize(getDefaultUri(conf), conf);
> {code}
> We call fs.initialize() explicitly. Generally the FS will already have been created and initialized, either by the FileSystem.get() call above or even earlier, when we check
> {code}
> FileSystem fs = p.getFileSystem(c);
> {code}
> The FS that gets cached in the hadoop-common layer is already initialized for us, so calling initialize() again is redundant.
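
The redundancy described above can be sketched with a minimal, self-contained analogue of the hadoop-common cache pattern. The `ToyFileSystem`/`ToyCache` classes below are hypothetical stand-ins, not Hadoop or HBase code; they only illustrate that `get()` already initializes the instance it caches, so a second explicit `initialize()` (as in the old HFileSystem constructor) repeats work for no benefit:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical analogue of a cached FileSystem: get() creates and
// initializes an instance once, then hands back the cached,
// already-initialized object on every later call.
class ToyFileSystem {
    int initCount = 0; // counts how many times initialize() has run

    void initialize(String uri) {
        initCount++; // real code would set up state from the conf/URI here
    }
}

class ToyCache {
    private static final Map<String, ToyFileSystem> CACHE = new HashMap<>();

    // Mirrors the shape of FileSystem.get(conf): create and initialize
    // on the first request for a URI, return the cached instance after.
    static synchronized ToyFileSystem get(String uri) {
        return CACHE.computeIfAbsent(uri, u -> {
            ToyFileSystem fs = new ToyFileSystem();
            fs.initialize(u); // initialization happens inside get()
            return fs;
        });
    }
}

public class RedundantInitDemo {
    public static void main(String[] args) {
        ToyFileSystem fs = ToyCache.get("hdfs://example");
        // fs is already initialized by get(); calling initialize()
        // again, as the pre-fix HFileSystem code did, is redundant.
        fs.initialize("hdfs://example");
        System.out.println(fs.initCount); // prints 2: second init was wasted
    }
}
```

With the fix, only the `get()`-driven initialization remains, so the counter in this toy model would stay at 1.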
--
This message was sent by Atlassian Jira
(v8.3.4#803005)