Posted to dev@jackrabbit.apache.org by "Danilo Ghirardelli (JIRA)" <ji...@apache.org> on 2011/09/01 16:13:15 UTC

[jira] [Commented] (JCR-2892) Large fetch sizes have potentially deleterious effects on VM memory requirements when using Oracle

    [ https://issues.apache.org/jira/browse/JCR-2892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13095306#comment-13095306 ] 

Danilo Ghirardelli commented on JCR-2892:
-----------------------------------------

I have the same problem: a fetch size of 10000 causes an out-of-memory error, even on Jackrabbit 2.2.8.
In my case it seems to be triggered by the clustering configuration, probably because the journal queries return more than a single row; when that happens, the Oracle driver allocates buffer memory for all 10000 rows up front, as the sketch below illustrates.
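To make the mechanism concrete, here is a minimal JDBC sketch (not Jackrabbit's actual code; the connection details and the journal table/column names are placeholders). With Oracle's 10g-era drivers, the fetch size set on the statement determines how much row-buffer memory the driver reserves at execution time, so a small value keeps the allocation bounded:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class FetchSizeSketch {
        public static void main(String[] args) throws Exception {
            // Connection details are illustrative only.
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:XE", "user", "password");
            // "JOURNAL" stands in for whatever table the clustering code reads.
            PreparedStatement stmt = con.prepareStatement(
                    "SELECT REVISION_ID, JOURNAL_DATA FROM JOURNAL");
            // With a fetch size of 10000, the 10g driver reserves buffer space
            // for 10000 rows when the query executes, regardless of how many
            // rows actually come back. A small value avoids the spike.
            stmt.setFetchSize(10);
            ResultSet rs = stmt.executeQuery();
            while (rs.next()) {
                // process rows...
            }
            rs.close();
            stmt.close();
            con.close();
        }
    }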

You can reproduce it with Oracle 10 (the XE edition also shows the problem), driver 10.2.0.4 or 10.2.0.5, the default JVM heap size (32-bit JVM, not 64-bit, on Windows), and clustering configured. The mere presence of clustering causes the crash in my case; you may want to save a few nodes first to force some rows into the clustering tables.

In any case, a fetch size that large is dangerous. If it was set only to limit PostgreSQL's allocation and the typical query is expected to return a single record, it should be hard-coded to 10 instead... In my tests the application starts correctly with a fetch size of 10 or 100, but not with 1000 or above. Better still, the value could be made configurable, as sketched below.
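One way out would be to make the value overridable instead of a compile-time constant. A hedged sketch; the system property name is hypothetical, not an existing Jackrabbit option:

    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    final class FetchSizeConfig {
        // Hypothetical property; defaults to a safe small value instead of
        // the hard-coded 10000.
        private static final int FETCH_SIZE =
                Integer.getInteger("org.apache.jackrabbit.core.db.fetchSize", 10);

        // Call wherever a ResultSet-returning statement is prepared.
        static void apply(PreparedStatement stmt) throws SQLException {
            stmt.setFetchSize(FETCH_SIZE);
        }
    }

Sites whose queries really do return large result sets could then raise the value with -Dorg.apache.jackrabbit.core.db.fetchSize=1000 without rebuilding.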

> Large fetch sizes have potentially deleterious effects on VM memory requirements when using Oracle
> --------------------------------------------------------------------------------------------------
>
>                 Key: JCR-2892
>                 URL: https://issues.apache.org/jira/browse/JCR-2892
>             Project: Jackrabbit Content Repository
>          Issue Type: Bug
>          Components: jackrabbit-core, sql
>    Affects Versions: 2.2.2
>         Environment: Oracle 10g+
>            Reporter: Christopher Elkins
>
> Since Release 10g, Oracle JDBC drivers use the fetch size to allocate buffers for caching row data.
> cf. http://www.oracle.com/technetwork/database/enterprise-edition/memory.pdf
> r1060431 hard-codes the fetch size for all ResultSet-returning statements to 10,000. This value has significant, potentially deleterious, effects on the heap space required for even moderately-sized repositories. For example, the BUNDLE table (from 'oracle.ddl') has two columns -- NODE_ID raw(16) and BUNDLE_DATA blob -- which require 16 bytes and 4 KB of buffer space per row, respectively. This implies a buffer of more than 40 MB [(16 + 4096) * 10000 = 41,120,000 bytes].
> If the issue described in JCR-2832 is truly specific to PostgreSQL, I think its resolution should be moved to a PostgreSQL-specific ConnectionHelper subclass. Failing that, there should be a way to override this hard-coded value in OracleConnectionHelper.
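A sketch of that subclass idea follows (the class shape and method name are illustrative assumptions, not Jackrabbit's actual ConnectionHelper API). On PostgreSQL, setting a fetch size keeps the driver from materializing the whole result set in memory, which is the problem JCR-2832 addressed; Oracle, which allocates buffers up front, would simply keep the driver default:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Illustrative only: the real ConnectionHelper has a different shape.
    // The point is to confine the large fetch size to the one database
    // that benefits from it.
    class PostgreSQLConnectionHelper {
        PreparedStatement prepare(Connection con, String sql) throws SQLException {
            PreparedStatement stmt = con.prepareStatement(sql);
            // Fetch in bounded batches instead of loading every row at once;
            // helpful here, harmful on Oracle.
            stmt.setFetchSize(10000);
            return stmt;
        }
    }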

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira