Posted to issues@hbase.apache.org by "Anoop Sam John (JIRA)" <ji...@apache.org> on 2018/03/14 05:59:00 UTC

[jira] [Updated] (HBASE-11425) Cell/DBB end-to-end on the read-path

     [ https://issues.apache.org/jira/browse/HBASE-11425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anoop Sam John updated HBASE-11425:
-----------------------------------
    Affects Version/s:     (was: 0.99.0)
         Release Note: 
For an end-to-end (E2E) off-heap read path, there must first be an off-heap backed BucketCache (BC). Configure 'hbase.bucketcache.ioengine' to 'offheap' in hbase-site.xml, and specify the total capacity of the BC using the 'hbase.bucketcache.size' config. Remember to adjust the value of 'HBASE_OFFHEAPSIZE' in hbase-env.sh to match this capacity: it specifies the maximum off-heap memory the RegionServer (RS) Java process can allocate, so it must be larger than the off-heap BC size. Keep in mind that 'hbase.bucketcache.ioengine' has no default value, which means the BC is turned OFF by default.
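As a sketch, the two settings above might look like this in hbase-site.xml (the 4 GB capacity and 6 GB off-heap size here are illustrative values, not recommendations):

```xml
<!-- hbase-site.xml: enable an off heap backed BucketCache -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <!-- total BC capacity in MB; illustrative value (4 GB) -->
  <value>4096</value>
</property>
```

with a matching line in hbase-env.sh, kept larger than the BC size to leave room for other direct allocations:

```
export HBASE_OFFHEAPSIZE=6G
```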
The next thing to tune is the ByteBuffer pool on the RPC server side. Buffers from this pool are used to accumulate the cell bytes and create a result cell block to send back to the client side. 'hbase.ipc.server.reservoir.enabled' can be used to turn this pool ON or OFF. By default the pool is ON, and HBase will create off-heap ByteBuffers and pool them. Make sure not to turn this OFF if you want E2E off-heaping in the read path: if the pool is turned off, the server will create temporary on-heap buffers to accumulate the cell bytes and make the result cell block, which can impact GC on a server under heavy read load. The user can tune how many buffers the pool holds and the size of each ByteBuffer.
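The pool toggle itself is a one-line setting; a sketch (true is already the default, shown here only for explicitness):

```xml
<property>
  <name>hbase.ipc.server.reservoir.enabled</name>
  <value>true</value>
</property>
```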
Use the config 'hbase.ipc.server.reservoir.initial.buffer.size' to tune the size of each buffer; it defaults to 64 KB. When the read pattern is random row reads and each row is much smaller than 64 KB, try reducing this. When the result size is larger than one buffer, the server will grab more than one buffer and make the result cell block out of those. When the pool runs out of buffers, the server will fall back to creating temporary on-heap buffers for the cell block result.

The maximum number of ByteBuffers in the pool can be tuned using the config 'hbase.ipc.server.reservoir.initial.max'. It defaults to 64 * the number of RegionServer handlers configured (see also the config 'hbase.regionserver.handler.count'). The math is as follows: by default we assume a 2 MB result cell block per read result, with each handler serving one such read. A 2 MB result needs 32 buffers of the default 64 KB size, so 32 buffers per handler. We allow twice that count as the maximum, so that a handler can build one response, hand it over to the RPC Responder thread, and then build the response cell block for the next request from pooled buffers even if the Responder has not yet been able to send back the first TCP reply. For smaller-sized random row reads, tune this max count down as well. The buffers are created lazily in any case; this is only the maximum count to be pooled.

The setting for HBASE_OFFHEAPSIZE in hbase-env.sh should also account for this off-heap buffer pool on the RPC side: configure the RS max off-heap size somewhat higher than the sum of the max pool size and the off-heap cache size. The TCP layer will also need to create direct ByteBuffers for TCP communication, and the DFS client will need some more. As per our tests, an extra 1 - 2 GB on top for the max direct memory size was sufficient.
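The default sizing arithmetic above can be sketched as a small calculation. This is illustrative code, not HBase source; the 2 MB per-result figure is the assumption described above, and 30 is the usual default for 'hbase.regionserver.handler.count':

```java
// Not HBase source: a sketch of the default sizing arithmetic for
// 'hbase.ipc.server.reservoir.initial.max' as described above.
public class ReservoirSizing {

    static final int RESULT_CELL_BLOCK_BYTES = 2 * 1024 * 1024; // assumed 2 MB per read result

    static int defaultMaxBuffers(int handlerCount, int bufferSizeBytes) {
        int buffersPerHandler = RESULT_CELL_BLOCK_BYTES / bufferSizeBytes; // 32 for 64 KB buffers
        // Doubled so a handler can build the next response while the RPC
        // Responder thread still holds the buffers of the previous one.
        return 2 * buffersPerHandler * handlerCount;
    }

    public static void main(String[] args) {
        int handlers = 30;          // hbase.regionserver.handler.count (default 30)
        int bufSize = 64 * 1024;    // hbase.ipc.server.reservoir.initial.buffer.size default
        System.out.println(defaultMaxBuffers(handlers, bufSize)); // 64 buffers per handler * 30
    }
}
```

With the defaults this yields 64 buffers per handler; smaller per-row results justify reducing both the buffer size and this max count.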

If you still see GC issues even after making the E2E read path off-heap, check for exhaustion of this buffer pool. Look for the below INFO-level message in the RS log:
"Pool already reached its max capacity : XXX and no free buffers now. Consider increasing the value for 'hbase.ipc.server.reservoir.initial.max' ?"

If you are using coprocessors and refer to the Cells in the read results, DO NOT store references to these Cells beyond the scope of the CP hook methods. Sometimes a CP needs to keep information about a cell (like its row key) for use in a later CP hook call. In such cases, clone the required fields of the Cell as per the use case. [ See the CellUtil#cloneXXX(Cell) APIs ]
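A minimal, self-contained illustration of why such references go stale. This is plain java.nio, not HBase code; "row-1"/"row-2" are made-up payloads standing in for cell bytes, and the class name is hypothetical:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Not HBase code: a plain java.nio sketch of the pooled-buffer hazard.
// A "cell" whose bytes live in a pooled direct ByteBuffer becomes garbage
// once the pool reuses that buffer for the next response.
public class PooledBufferHazard {

    static String[] demo() {
        ByteBuffer pooled = ByteBuffer.allocateDirect(16); // stands in for a reservoir buffer

        // A CP hook observes a cell whose row key bytes sit in the pooled buffer.
        pooled.put("row-1".getBytes(StandardCharsets.UTF_8)).flip();

        // WRONG: hold a view into the pooled buffer beyond the hook's scope.
        ByteBuffer staleView = pooled.duplicate();

        // RIGHT: copy out the bytes you need (what CellUtil#cloneXXX does).
        byte[] cloned = new byte[pooled.remaining()];
        pooled.duplicate().get(cloned);

        // The pool later reuses the buffer for another response...
        pooled.clear();
        pooled.put("row-2".getBytes(StandardCharsets.UTF_8)).flip();

        byte[] viaStale = new byte[staleView.remaining()];
        staleView.get(viaStale);
        return new String[] {
            new String(viaStale, StandardCharsets.UTF_8), // the stale view now reads the new payload
            new String(cloned, StandardCharsets.UTF_8)    // the clone still reads the original
        };
    }

    public static void main(String[] args) {
        String[] r = demo();
        System.out.println("stale view sees:  " + r[0]);
        System.out.println("cloned copy sees: " + r[1]);
    }
}
```

The stale view ends up reading whatever the next response wrote into the buffer, while the cloned bytes remain valid, which is exactly why cloning inside the hook is required.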

> Cell/DBB end-to-end on the read-path
> ------------------------------------
>
>                 Key: HBASE-11425
>                 URL: https://issues.apache.org/jira/browse/HBASE-11425
>             Project: HBase
>          Issue Type: Umbrella
>          Components: regionserver, Scanners
>            Reporter: Anoop Sam John
>            Assignee: Anoop Sam John
>            Priority: Major
>             Fix For: 2.0.0
>
>         Attachments: BenchmarkTestCode.zip, Benchmarks_Tests.docx, GC pics with evictions_4G heap.png, HBASE-11425-E2E-NotComplete.patch, HBASE-11425.patch, Offheap reads in HBase using BBs_V2.pdf, Offheap reads in HBase using BBs_final.pdf, Screen Shot 2015-10-16 at 5.13.22 PM.png, gc.png, gets.png, heap.png, load.png, median.png, ram.log
>
>
> Umbrella jira to make sure we can have blocks cached in offheap backed cache. In the entire read path, we can refer to this offheap buffer and avoid onheap copying.
> The high level items I can identify as of now are
> 1. Avoid the array() call on BB in the read path. (This appears in many classes; we can handle them class by class.)
> 2. Support Buffer based getter APIs in Cell. In the read path we will create a new Cell backed by a BB. This will be needed in CellComparator, Filter (like SCVF), CPs etc.
> 3. Avoid KeyValue.ensureKeyValue() calls in the read path - this makes a byte copy.
> 4. Remove all CP hooks (which are already deprecated) which deal with KVs (in the read path).
> Will add subtasks under this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)