Posted to issues@hbase.apache.org by "binlijin (Jira)" <ji...@apache.org> on 2019/08/27 14:00:15 UTC

[jira] [Comment Edited] (HBASE-22072) High read/write intensive regions may cause long crash recovery

    [ https://issues.apache.org/jira/browse/HBASE-22072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916746#comment-16916746 ] 

binlijin edited comment on HBASE-22072 at 8/27/19 2:00 PM:
-----------------------------------------------------------

HBASE-20322 and HBASE-22072 resolve the same problem, and HBASE-20322 will close memStoreScanners if there is no need to updateReaders. We need to close memStoreScanners for branch-2 and master as well; if not, there will be a memory leak in MemStoreLABImpl#close because the Chunks can't be reclaimed correctly. [~javaman_chen] also mentioned it in HBASE-20373.
{code}
+      if (this.closing) {
+        // Lets close scanners created by caller, since close() won't notice this.
+        clearAndClose(memStoreScanners);
+        return;
+      }
{code}
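
For context, here is a rough sketch of where such an early return would sit in StoreScanner#updateReaders and what clearAndClose is expected to do, namely close each scanner so the MemStoreLAB chunks it pins can be returned. This only follows the snippet above and is not the exact patch; the surrounding code on branch-2/master may differ in detail.
{code}
// Sketch only -- the real StoreScanner#updateReaders has more surrounding logic.
public void updateReaders(List<HStoreFile> sfs, List<KeyValueScanner> memStoreScanners)
    throws IOException {
  if (this.closing) {
    // The store is already closing, so close() will never see these scanners.
    // Close them here, otherwise their MemStoreLAB chunks are never released.
    clearAndClose(memStoreScanners);
    return;
  }
  // ... normal reader update path ...
}

private static void clearAndClose(List<KeyValueScanner> scanners) {
  if (scanners == null) {
    return;
  }
  for (KeyValueScanner scanner : scanners) {
    scanner.close(); // drops the scanner's reference to its MemStoreLAB chunks
  }
  scanners.clear();
}
{code}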



> High read/write intensive regions may cause long crash recovery
> ---------------------------------------------------------------
>
>                 Key: HBASE-22072
>                 URL: https://issues.apache.org/jira/browse/HBASE-22072
>             Project: HBase
>          Issue Type: Bug
>          Components: Performance, Recovery
>    Affects Versions: 2.0.0
>            Reporter: Pavel
>            Assignee: ramkrishna.s.vasudevan
>            Priority: Major
>              Labels: compaction
>             Fix For: 2.2.0, 2.3.0, 2.0.6, 2.1.5
>
>         Attachments: HBASE-22072.HBASE-21879-v1.patch
>
>
> Compaction of a region under high read load may leave compacted files undeleted because of existing scan references:
> INFO org.apache.hadoop.hbase.regionserver.HStore - Can't archive compacted file hdfs://hdfs-ha/hbase... because of either isCompactedAway=true or file has reference, isReferencedInReads=true, refCount=1, skipping for now
> If the region is also under high write load this happens quite often, and the region may end up with only a few storefiles but tons of undeleted compacted hdfs files.
> The region keeps all those files (in my case thousands) until the graceful region closing procedure, which ignores existing references and drops the obsolete files. Apart from consuming some extra hdfs space this works fine, but only in the case of a normal region close. If the region server crashes, the new region server responsible for that overfilled region reads the hdfs folder and tries to deal with all the undeleted files, producing tons of storefiles and compaction tasks and consuming an abnormal amount of memory, which may lead to an OutOfMemory exception and further region server crashes. Writes to the region stop because the number of storefiles reaches the *hbase.hstore.blockingStoreFiles* limit, GC duty stays high, and it may take hours to compact everything back into a working set of files.
> A workaround is to periodically check the file count of each region's hdfs folder and force a region re-assign for the ones with too many files (a sketch of this follows the quoted description below).
> It would be nice if the regionserver had a setting similar to hbase.hstore.blockingStoreFiles and attempted to drop undeleted compacted files once the number of files reaches that setting.
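
Regarding the workaround mentioned in the quoted description, a minimal sketch of such a periodic check is below. It assumes the default HDFS layout under /hbase/data and an arbitrary threshold; the table name, path and threshold are placeholders to adapt, and the region is closed via Admin#unassign so the master re-opens it, which lets the fresh open drop the compacted-away files.
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;

public class ReassignOverfilledRegions {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("my_table");          // placeholder table
    Path tableDir = new Path("/hbase/data/default/my_table"); // assumed default layout
    int threshold = 200; // what counts as "too many files" on your cluster
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      FileSystem fs = FileSystem.get(conf);
      for (RegionInfo ri : admin.getRegions(table)) {
        int files = 0;
        for (FileStatus cf : fs.listStatus(new Path(tableDir, ri.getEncodedName()))) {
          if (cf.isDirectory()) { // one directory per column family
            files += fs.listStatus(cf.getPath()).length;
          }
        }
        if (files > threshold) {
          // Closing the region makes the master re-assign it; the re-opened
          // region no longer carries the compacted-away files kept alive by
          // stale read references.
          admin.unassign(ri.getRegionName(), false);
        }
      }
    }
  }
}
{code}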



--
This message was sent by Atlassian Jira
(v8.3.2#803003)