Posted to issues@hbase.apache.org by "Matteo Bertozzi (JIRA)" <ji...@apache.org> on 2014/02/03 20:23:11 UTC

[jira] [Updated] (HBASE-10319) HLog should roll periodically to allow DN decommission to eventually complete.

     [ https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matteo Bertozzi updated HBASE-10319:
------------------------------------

    Attachment: HBASE-10319-v0.patch

v0 forces the roll once the logroll.period has elapsed, even if no new edits have been written.
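For illustration only, here is a minimal sketch of the idea behind the patch, not the attached patch itself. The names used (PeriodicLogRoller, rollPeriodMs, rollWriter, the Log interface) are assumptions made for this example and do not correspond to the actual HBase classes touched by HBASE-10319.

    // Illustrative sketch only -- not the actual HBASE-10319 patch.
    // PeriodicLogRoller, rollPeriodMs, and rollWriter() are hypothetical names.
    public class PeriodicLogRoller implements Runnable {

        /** Hypothetical WAL handle; stands in for the region server's HLog. */
        public interface Log {
            void rollWriter() throws Exception;
        }

        private final Log log;
        private final long rollPeriodMs;      // e.g. hbase.regionserver.logroll.period
        private volatile long lastRollTime;   // time of the last completed roll

        public PeriodicLogRoller(Log log, long rollPeriodMs) {
            this.log = log;
            this.rollPeriodMs = rollPeriodMs;
            this.lastRollTime = System.currentTimeMillis();
        }

        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    long now = System.currentTimeMillis();
                    // Force a roll once the configured period has elapsed, even if
                    // no edits were written, so the currently open block can be
                    // finalized and a decommissioning DataNode is not blocked on it.
                    if (now - lastRollTime >= rollPeriodMs) {
                        log.rollWriter();
                        lastRollTime = now;
                    }
                    Thread.sleep(Math.min(rollPeriodMs, 10_000L));
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                } catch (Exception e) {
                    // A real region server would log this and retry on the next pass.
                }
            }
        }
    }

The point of the sketch is only the time-based check: the roll is triggered by elapsed wall-clock time rather than by the amount of data written since the last roll.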

> HLog should roll periodically to allow DN decommission to eventually complete.
> ------------------------------------------------------------------------------
>
>                 Key: HBASE-10319
>                 URL: https://issues.apache.org/jira/browse/HBASE-10319
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Jonathan Hsieh
>         Attachments: HBASE-10319-v0.patch
>
>
> We encountered a situation where we had an essentially read-only table and attempted to do a clean HDFS DN decommission.  A DN cannot decommission while it hosts open blocks that are currently being written.  Because the HBase HLog file was open and contained some data (the HLog header), the DN could not decommission itself.  Since no new data is ever written, the existing periodic check is never triggered.
> After discussing with [~atm], it seems that although an HDFS semantics change would be ideal (e.g. HBase would not have to be aware of HDFS decommission and the client would roll over), this would take much more effort than having HBase periodically force a log roll.  Forcing the roll would allow the HDFS DN decommission to eventually complete.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)