Posted to issues@hbase.apache.org by "Anoop Sam John (JIRA)" <ji...@apache.org> on 2016/07/14 07:39:20 UTC

[jira] [Created] (HBASE-16229) Cleaning up size and heapSize calculation

Anoop Sam John created HBASE-16229:
--------------------------------------

             Summary: Cleaning up size and heapSize calculation
                 Key: HBASE-16229
                 URL: https://issues.apache.org/jira/browse/HBASE-16229
             Project: HBase
          Issue Type: Sub-task
    Affects Versions: 2.0.0
            Reporter: Anoop Sam John
            Assignee: Anoop Sam John
             Fix For: 2.0.0


It is a bit ugly now. For example, in AbstractMemStore:
{code}
public final static long FIXED_OVERHEAD = ClassSize.align(
      ClassSize.OBJECT +
          (4 * ClassSize.REFERENCE) +
          (2 * Bytes.SIZEOF_LONG));

  public final static long DEEP_OVERHEAD = ClassSize.align(FIXED_OVERHEAD +
      (ClassSize.ATOMIC_LONG + ClassSize.TIMERANGE_TRACKER +
      ClassSize.CELL_SKIPLIST_SET + ClassSize.CONCURRENT_SKIPLISTMAP));
{code}
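For reference, a minimal sketch of the 8-byte alignment that `ClassSize.align` applies to these overhead sums (assumption: sizes are rounded up to the JVM's 8-byte object-alignment boundary; the class and constant names below are placeholders, not the actual HBase code):

```java
// Sketch of 8-byte alignment as done by ClassSize.align (placeholder impl).
class AlignSketch {
  static long align(long num) {
    // Round up to the nearest multiple of 8 bytes.
    return ((num + 7) >> 3) << 3;
  }
}
```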
We include the heap overhead of Segment here as well. It would be better if the Segment tracked its own overhead and the MemStore implementation summed the heap sizes of all of its segments to calculate its size.
Also this:
{code}
public long heapSize() {
    return getActive().getSize();
  }
{code}
heapSize() should consider all segments' sizes, not just the active segment's. I am not able to see an overriding method in CompactingMemStore.

This jira tries to solve some of these.
When we create a Segment, we seem to pass some initial heap size value to it. Why? The Segment object should internally know what its heap size is; it should not be dictated by someone else.
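A sketch of that idea, with the segment computing its own initial size in the constructor rather than accepting one from the caller (the constant and class name are hypothetical stand-ins for the ClassSize-based values):

```java
// Sketch: the segment owns its heap-size accounting; no caller-supplied size.
class SelfSizingSegment {
  // Placeholder for the real ClassSize-derived fixed overhead.
  private static final long FIXED_OVERHEAD = 48L;
  private long heapSize;

  SelfSizingSegment() {
    // The segment itself knows its overhead at construction time.
    this.heapSize = FIXED_OVERHEAD;
  }

  // Adjust the tracked size as cells are added or removed.
  void incSize(long delta) {
    heapSize += delta;
  }

  long heapSize() {
    return heapSize;
  }
}
```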

More to add while doing this cleanup.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)