Posted to oak-issues@jackrabbit.apache.org by "Alex Parvulescu (JIRA)" <ji...@apache.org> on 2016/05/25 09:12:12 UTC

[jira] [Updated] (OAK-3007) SegmentStore cache does not take "string" map into account

     [ https://issues.apache.org/jira/browse/OAK-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Parvulescu updated OAK-3007:
---------------------------------
    Fix Version/s: 1.2.16

> SegmentStore cache does not take "string" map into account
> ----------------------------------------------------------
>
>                 Key: OAK-3007
>                 URL: https://issues.apache.org/jira/browse/OAK-3007
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: segmentmk
>            Reporter: Thomas Mueller
>            Assignee: Michael Dürig
>              Labels: candidate_oak_1_0, doc-impacting, resilience, scalability
>             Fix For: 1.4, 1.3.3, 1.2.16
>
>         Attachments: OAK-3007-2.patch, OAK-3007-3.patch, OAK-3007.patch
>
>
> The SegmentStore cache size calculation ignores the size of the field Segment.string (a concurrent hash map): a regular segment in a memory-mapped file appears to be counted with a fixed size of 1024, no matter how many strings it has loaded into memory. There seems to be no way to limit or configure the amount of memory used by these strings, which can lead to out-of-memory errors. In one example, 100'000 segments were loaded in memory and the strings held in those maps used 5 GB.
> We need a way to configure the amount of memory used for these strings. The map is effectively a cache; OAK-2688 addresses this, but it would be better to have a single cache with a configurable size limit.
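
As an illustration of the suggested direction (a single cache with a configurable size limit that actually accounts for string memory), here is a minimal sketch using Guava's CacheBuilder with a weigher. This is not Oak's actual implementation; the class name, the maximumWeightBytes parameter, and the footprint estimate are assumptions made only for this example.

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

// Sketch only: a single, weight-bounded cache for strings read from segments.
// The class name and maximumWeightBytes are illustrative, not Oak's API.
public class BoundedStringCache {

    private final Cache<String, String> cache;

    public BoundedStringCache(long maximumWeightBytes) {
        // Weigh each entry by a rough estimate of its heap footprint:
        // ~2 bytes per char for key and value plus a fixed per-entry overhead.
        Weigher<String, String> approximateFootprint =
                (key, value) -> 64 + 2 * key.length() + 2 * value.length();

        this.cache = CacheBuilder.newBuilder()
                .weigher(approximateFootprint)
                .maximumWeight(maximumWeightBytes)
                .build();
    }

    // Returns the cached string for the given record id, or null if absent.
    public String getIfPresent(String recordId) {
        return cache.getIfPresent(recordId);
    }

    // Stores a string that was just read from a segment.
    public void put(String recordId, String value) {
        cache.put(recordId, value);
    }
}

With a weigher in place, Guava evicts entries once the total weight reaches the configured maximum, so loaded strings can no longer grow without bound the way the unbounded per-segment map described above does.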



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)