Posted to issues@hbase.apache.org by "Michael Stack (Jira)" <ji...@apache.org> on 2021/08/16 17:58:00 UTC

[jira] [Commented] (HBASE-26017) hbase performance evaluation tool could not support datasize more than 2048g

    [ https://issues.apache.org/jira/browse/HBASE-26017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17399898#comment-17399898 ] 

Michael Stack commented on HBASE-26017:
---------------------------------------

The PR looks good. I left some comments. The refactor of PE to support bigger sizes can be done in a follow-on.

> hbase performance evaluation tool could not support datasize more than 2048g
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-26017
>                 URL: https://issues.apache.org/jira/browse/HBASE-26017
>             Project: HBase
>          Issue Type: Bug
>          Components: PE
>    Affects Versions: 2.1.0, 2.3.2, 2.4.4
>            Reporter: dingwei2019
>            Assignee: dingwei2019
>            Priority: Minor
>
> In our daily testing we may want to test a data size larger than 2048 GB. When --size is set above 2048g, PE prints an abnormal message like this:
> [TestClient-1] hbase.PerformanceEvaluation: Start class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest at offset -1138166308 for -21474836 rows
>  
> This is because the totalRows field in TestOptions is declared as an int (range -2147483648 to 2147483647), and one GB corresponds to 1048576 (1024*1024) rows by default. Since totalRows can hold at most 2147483647, the largest writable size is 2147483647/1048576 ≈ 2047.999 GB.
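The overflow described in the report can be sketched as follows. This is a minimal standalone illustration, not the actual PerformanceEvaluation code; the constant name and the example size of 2049 GB are assumptions for demonstration only.

```java
public class TotalRowsOverflowDemo {
    // PE's default of 1024*1024 = 1,048,576 rows per GB (name is hypothetical)
    static final int DEFAULT_ROWS_PER_GB = 1024 * 1024;

    public static void main(String[] args) {
        int sizeInGb = 2049; // any --size above ~2047.999 GB triggers the bug

        // int arithmetic wraps around: 2049 * 1,048,576 = 2,148,532,224,
        // which exceeds Integer.MAX_VALUE (2,147,483,647) and goes negative
        int totalRowsInt = sizeInGb * DEFAULT_ROWS_PER_GB;
        System.out.println("int totalRows:  " + totalRowsInt);

        // widening one operand to long keeps the full value
        long totalRowsLong = (long) sizeInGb * DEFAULT_ROWS_PER_GB;
        System.out.println("long totalRows: " + totalRowsLong);
    }
}
```

Running this prints a negative int row count alongside the correct long value, which matches the negative "rows" figure in the log line above.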



--
This message was sent by Atlassian Jira
(v8.3.4#803005)