Posted to issues@hbase.apache.org by "stack (JIRA)" <ji...@apache.org> on 2016/02/10 01:43:18 UTC

[jira] [Commented] (HBASE-15241) Blockcache only loads 100k blocks from a file

    [ https://issues.apache.org/jira/browse/HBASE-15241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15140108#comment-15140108 ] 

stack commented on HBASE-15241:
-------------------------------

Also, the count is an int only.

> Blockcache only loads 100k blocks from a file
> ---------------------------------------------
>
>                 Key: HBASE-15241
>                 URL: https://issues.apache.org/jira/browse/HBASE-15241
>             Project: HBase
>          Issue Type: Sub-task
>          Components: BucketCache
>            Reporter: stack
>
> We can only load 100k blocks from a file. If you have 256GB of SSD and blocks are 4k in size to align with SSD block reads, and you want it all in cache, the 100k limit gets in the way (the 100k may be an absolute limit... checking. In the UI I see 100k only).
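
As a rough back-of-the-envelope check of the numbers in the description (a sketch only; the 256GB and 4KB figures come from the issue text, while the class and variable names below are made up for illustration and are not HBase code):

    // Rough arithmetic only, not HBase code. Assumes a 256GB cache file and 4KB
    // blocks as described in the issue; 100k is the limit reported in the UI.
    public class BucketCacheBlockMath {
        public static void main(String[] args) {
            long cacheBytes = 256L * 1024 * 1024 * 1024; // 256GB of SSD
            long blockBytes = 4L * 1024;                 // 4KB blocks, aligned to SSD reads
            long blocksNeeded = cacheBytes / blockBytes; // blocks required to fill the cache
            long reportedLimit = 100_000L;               // limit observed in the UI

            System.out.println("Blocks needed to fill cache: " + blocksNeeded); // 67,108,864
            System.out.println("Reported per-file limit:     " + reportedLimit);
            // ~67 million blocks is far beyond 100k, but still fits comfortably in a
            // signed 32-bit int (max ~2.1 billion), so the int-typed count mentioned in
            // the comment above would only become the binding limit at much larger scales.
        }
    }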



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)