Posted to dev@kylin.apache.org by "hongbin ma (JIRA)" <ji...@apache.org> on 2016/04/21 12:11:26 UTC

[jira] [Created] (KYLIN-1601) No need to shrink scan cache when HBase rows can be large

hongbin ma created KYLIN-1601:
---------------------------------

             Summary: No need to shrink scan cache when HBase rows can be large
                 Key: KYLIN-1601
                 URL: https://issues.apache.org/jira/browse/KYLIN-1601
             Project: Kylin
          Issue Type: Bug
            Reporter: hongbin ma
            Assignee: hongbin ma


To control memory usage, we used to shrink the scan cache when HBase rows could be large:

        // shrink the scan cache tenfold when a memory-hungry measure is present
        if (RowValueDecoder.hasMemHungryMeasures(rowValueDecoders)) {
            scan.setCaching(scan.getCaching() / 10);
        }

However, scan.setCaching is now always accompanied by scan.setMaxResultSize, so the shrinking is no longer necessary: the byte-size limit will kick in before the row-count limit does.

Quote from http://www.cloudera.com/documentation/enterprise/5-2-x/topics/admin_hbase_scanning.htm:

"When you use setCaching and setMaxResultSize together, single server requests are limited by either number of rows or maximum result size, whichever limit comes first."



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)