Posted to issues@trafodion.apache.org by "Suresh Subbiah (JIRA)" <ji...@apache.org> on 2015/10/08 00:38:27 UTC

[jira] [Closed] (TRAFODION-1482) disabling BlockCache for all unbounded scan is not correct for dictionary tables

     [ https://issues.apache.org/jira/browse/TRAFODION-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Subbiah closed TRAFODION-1482.
-------------------------------------

> disabling BlockCache for all unbounded scan is not correct for dictionary tables
> --------------------------------------------------------------------------------
>
>                 Key: TRAFODION-1482
>                 URL: https://issues.apache.org/jira/browse/TRAFODION-1482
>             Project: Apache Trafodion
>          Issue Type: Bug
>          Components: sql-cmp, sql-exe
>    Affects Versions: 1.1 (pre-incubation)
>            Reporter: Eric Owhadi
>            Assignee: Suresh Subbiah
>              Labels: performance
>
> There is a workaround that was implemented to avoid block cache thrashing triggered by full table scans.
> It is in HTableClient.java, in a line looking like:
> // Disable block cache for full table scan
> if (startRow == null && stopRow == null)
>                 scan.setCacheBlocks(false);
>  
> This line bypasses the cacheBlocks parameter passed to startScan, hence it is a workaround.
>  
> However, this workaround can hurt performance in other situations, such as “dictionary tables” in a normalized schema.
> For example, tables storing status codes, error codes, countries, etc., referenced through foreign keys, are small, and I would imagine they will most likely be fetched and spread across ESPs for hash joins with startRow and stopRow null. With the workaround they won’t be cached, but they should be. Cache thrashing is a problem only when scanning large tables.
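The description above can be sketched as a small policy helper. This is a hypothetical illustration, not Trafodion code: the method name shouldCacheBlocks and the row-count threshold are assumptions. The idea is to honor the caller's cacheBlocks request, and to disable caching only for unbounded scans over tables large enough to risk thrashing, so small dictionary tables keep using the block cache.

```java
// Hypothetical sketch of a block cache policy for scans.
// None of these names come from HTableClient.java; they only
// illustrate the behavior the issue asks for.
public class ScanCachePolicy {

    /**
     * Decide whether a scan should use the HBase block cache.
     *
     * @param requested           the cacheBlocks value the caller passed to startScan
     * @param unbounded           true when startRow == null && stopRow == null
     * @param estimatedRowCount   rough size of the table being scanned (assumed available)
     * @param smallTableThreshold row count below which a full scan cannot thrash the cache
     */
    public static boolean shouldCacheBlocks(boolean requested, boolean unbounded,
                                            long estimatedRowCount,
                                            long smallTableThreshold) {
        if (!requested) {
            return false;          // caller explicitly opted out of caching
        }
        if (!unbounded) {
            return true;           // bounded scans keep caching, as before
        }
        // Unbounded scan: only a large table risks evicting hot blocks,
        // so small dictionary tables remain cached.
        return estimatedRowCount <= smallTableThreshold;
    }

    public static void main(String[] args) {
        // Small dictionary table, full scan: stays cached.
        System.out.println(shouldCacheBlocks(true, true, 100L, 10_000L));
        // Large table, full scan: cache disabled to avoid thrashing.
        System.out.println(shouldCacheBlocks(true, true, 5_000_000L, 10_000L));
        // Bounded scan on a large table: cached.
        System.out.println(shouldCacheBlocks(true, false, 5_000_000L, 10_000L));
    }
}
```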



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)