Posted to dev@hbase.apache.org by "Anoop Sam John (JIRA)" <ji...@apache.org> on 2017/03/22 18:04:41 UTC
[jira] [Created] (HBASE-17819) Reduce the heap overhead for BucketCache
Anoop Sam John created HBASE-17819:
--------------------------------------
Summary: Reduce the heap overhead for BucketCache
Key: HBASE-17819
URL: https://issues.apache.org/jira/browse/HBASE-17819
Project: HBase
Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Fix For: 2.0.0
We keep a bucket entry map in BucketCache. Below is the math for the heap size of each key and value in this map.
BlockCacheKey
---------------
String hfileName - Ref - 4
long offset - 8
BlockType blockType - Ref - 4
boolean isPrimaryReplicaBlock - 1
Total = 12 (Object) + 17 = 29
BucketEntry
------------
int offsetBase - 4
int length - 4
byte offset1 - 1
byte deserialiserIndex - 1
long accessCounter - 8
BlockPriority priority - Ref - 4
volatile boolean markedForEvict - 1
AtomicInteger refCount - 16 + 4
long cachedTime - 8
Total = 12 (Object) + 51 = 63
ConcurrentHashMap Map.Entry - 40
blocksByHFile ConcurrentSkipListSet Entry - 40
Total = 29 + 63 + 80 = 172
For 10 million blocks we will end up having 1.6GB of heap size.
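The per-block arithmetic above can be double-checked with a short Java sketch. The field sizes are the approximate values quoted in this issue (12-byte object header, 4-byte references), not values measured with a tool such as JOL, and the class and method names below are illustrative only:

```java
public class BucketCacheOverhead {
    // Approximate object header size assumed in the issue (bytes).
    static final int OBJECT_HEADER = 12;

    static int blockCacheKey() {
        // hfileName ref (4) + long offset (8) + blockType ref (4)
        // + boolean isPrimaryReplicaBlock (1)
        return OBJECT_HEADER + 4 + 8 + 4 + 1;   // = 29
    }

    static int bucketEntry() {
        // offsetBase (4) + length (4) + offset1 (1) + deserialiserIndex (1)
        // + accessCounter (8) + priority ref (4) + markedForEvict (1)
        // + AtomicInteger refCount (16 object + 4 ref) + cachedTime (8)
        return OBJECT_HEADER + 4 + 4 + 1 + 1 + 8 + 4 + 1 + 16 + 4 + 8;  // = 63
    }

    static int perBlock() {
        int mapEntry = 40;       // ConcurrentHashMap Map.Entry
        int skipListEntry = 40;  // blocksByHFile ConcurrentSkipListSet entry
        return blockCacheKey() + bucketEntry() + mapEntry + skipListEntry;
    }

    public static void main(String[] args) {
        int perBlock = perBlock();
        double gib = 10_000_000L * perBlock / (1024.0 * 1024 * 1024);
        System.out.println(perBlock + " bytes/block, ~" + gib + " GiB for 10M blocks");
    }
}
```

Running this confirms the totals in the description: 172 bytes per cached block, which for 10 million blocks is roughly 1.6 GiB of heap.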
This jira aims to reduce this overhead as much as possible.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)