Posted to issues@hbase.apache.org by "Jean-Daniel Cryans (JIRA)" <ji...@apache.org> on 2010/03/27 00:08:27 UTC

[jira] Commented: (HBASE-2382) Don't rely on fs.getDefaultReplication() to roll HLogs

    [ https://issues.apache.org/jira/browse/HBASE-2382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12850396#action_12850396 ] 

Jean-Daniel Cryans commented on HBASE-2382:
-------------------------------------------

Looks like I was a bit confused by how replication is decided. I tried the patch and it didn't work, but that's because HBase doesn't know what default replication factor is configured on that HDFS cluster, so it creates all the files with the default of 3.

@Dhruba, is there any way to interrogate HDFS for its default replication factor?
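
For reference, a minimal sketch of asking the NameNode for the per-file replication of an HLog instead of trusting the client-side default; the class and method names here are illustrative only, not from the attached patch:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HLogReplicationProbe {

  /**
   * Returns the replication factor the given HLog file was actually created
   * with. getFileStatus() asks the NameNode, so this reflects the server-side
   * value rather than fs.getDefaultReplication(), which only reads the
   * client's configuration.
   */
  public static short fileReplication(FileSystem fs, Path hlogPath)
      throws IOException {
    FileStatus status = fs.getFileStatus(hlogPath);
    return status.getReplication();
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path hlogPath = new Path(args[0]); // path to an HLog file under the HBase log dir
    System.out.println("file replication = " + fileReplication(fs, hlogPath));
    System.out.println("client default   = " + fs.getDefaultReplication());
  }
}
{code}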

> Don't rely on fs.getDefaultReplication() to roll HLogs
> ------------------------------------------------------
>
>                 Key: HBASE-2382
>                 URL: https://issues.apache.org/jira/browse/HBASE-2382
>             Project: Hadoop HBase
>          Issue Type: Improvement
>            Reporter: Jean-Daniel Cryans
>            Assignee: Nicolas Spiegelberg
>             Fix For: 0.20.4, 0.21.0
>
>         Attachments: HBASE-2382-20.4.patch
>
>
> As I was commenting in HBASE-2234, using fs.getDefaultReplication() to decide when to roll HLogs that have lost replicas isn't reliable: that value is client-side, so unless HBase is configured with it or has Hadoop's configuration files on its classpath, it will do the wrong thing.
> Dhruba added:
> bq. Can we use <hlogpath>.getFileStatus().getReplication() instead of fs.getDefaultReplication()? This will ensure that we look at the replication factor of the precise file we are interested in, rather than the system-wide default value.
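
A hedged sketch of the check that suggestion implies, comparing the live replica count of the current HLog against the replication it was created with; the shouldRollLog helper and the source of currentReplicas are assumptions for illustration, not the actual patch:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HLogRollCheck {

  /**
   * Roll the log when the write pipeline reports fewer live replicas than the
   * file was created with. currentReplicas is assumed to come from the DFS
   * output stream; obtaining it is outside the scope of this sketch.
   */
  static boolean shouldRollLog(FileSystem fs, Path hlogPath, int currentReplicas)
      throws IOException {
    // Per-file value from the NameNode, not the client-side fs.getDefaultReplication().
    short expected = fs.getFileStatus(hlogPath).getReplication();
    return currentReplicas < expected;
  }
}
{code}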

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.