Posted to issues@hbase.apache.org by "Zheng Hu (JIRA)" <ji...@apache.org> on 2019/05/06 02:09:00 UTC
[jira] [Commented] (HBASE-22339) BucketCache capacity limit may be wrong
[ https://issues.apache.org/jira/browse/HBASE-22339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833496#comment-16833496 ]
Zheng Hu commented on HBASE-22339:
----------------------------------
I think you are right, [~puleya7]. Do you want to configure a 128TB bucket cache? What's your requirement?
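The ~128TB figure can be verified with a quick standalone check. This is a sketch, not the actual HBase source; the constants (64KB block size, the Integer.MAX_VALUE comparison) mirror the snippet quoted below:

```java
// Compute the real capacity limit implied by the BucketCache guard
// blockNumCapacity >= Integer.MAX_VALUE (standalone sketch, not HBase code).
public class CapacityLimitCheck {
    public static void main(String[] args) {
        long blockSize = 64 * 1024;          // default block size: 64KB
        long maxBlocks = Integer.MAX_VALUE;  // 2147483647

        // Largest capacity that still passes the check is just under
        // Integer.MAX_VALUE * blockSize bytes.
        long limitBytes = maxBlocks * blockSize;       // 140737488289792 bytes
        double limitTb = limitBytes / Math.pow(1024, 4);

        // Just under 128TB, i.e. roughly 127.99TB -- not 32TB.
        System.out.println(limitBytes + " bytes = " + limitTb + " TB");
    }
}
```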
> BucketCache capacity limit may be wrong
> ---------------------------------------
>
> Key: HBASE-22339
> URL: https://issues.apache.org/jira/browse/HBASE-22339
> Project: HBase
> Issue Type: Improvement
> Components: BucketCache
> Reporter: puleya7
> Priority: Trivial
> Labels: cache
>
> In the constructor of BucketCache, the capacity limit looks like 32TB; code as follows:
>
> {code:java}
> long blockNumCapacity = capacity / blockSize;
> if (blockNumCapacity >= Integer.MAX_VALUE) {
>   // Enough for about 32TB of cache!
>   throw new IllegalArgumentException("Cache capacity is too large, only support 32TB now");
> }
> {code}
> The default blockSize is 64*1024 bytes and Integer.MAX_VALUE is 2147483647,
> so blockNumCapacity >= Integer.MAX_VALUE means capacity >= 2147483647 * 65536 bytes ≈ 127.99TB, not 32TB.
>
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)