Posted to dev@jackrabbit.apache.org by "Stefan Guggisberg (JIRA)" <ji...@apache.org> on 2009/12/16 14:43:18 UTC

[jira] Created: (JCR-2442) make internal item cache hierarchy-aware

make internal item cache hierarchy-aware
----------------------------------------

                 Key: JCR-2442
                 URL: https://issues.apache.org/jira/browse/JCR-2442
             Project: Jackrabbit Content Repository
          Issue Type: Improvement
          Components: jackrabbit-jcr2spi
            Reporter: Stefan Guggisberg
            Assignee: Michael Dürig


currently there are 2 configuration parameters which affect the performance of client-side tree traversals:

- fetch-depth
- size of item cache

my goal is to minimize the number of server roundtrips triggered by traversing the node hierarchy on the client.

the current eviction policy doesn't seem to be ideal for this use case. in the case of relatively deep tree structures
a request for e.g. '/foo' can easily cause a cache overflow and root nodes might get evicted from the cache.
a following request to '/foo' cannot be served from cache but will cause a deep fetch again, despite the fact
that the major part of the tree structure is still in the cache.

increasing the cache size OTOH bears the risk of OOM errors since the memory footprint of the cached state seems to be quite large.

using an LRU eviction policy and touching every node along the parent hierarchy when requesting an item might be a solution.
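
to illustrate the idea, here's a minimal sketch of such a hierarchy-aware LRU cache (class and method names are made up for this example and are not part of the jcr2spi code; keys are plain paths purely for readability):

    import java.util.LinkedHashMap;
    import java.util.Map;

    /**
     * Sketch only: an LRU item cache keyed by path that touches all
     * ancestors of a requested item, so that items close to the root
     * stay at the recently-used end and are evicted last.
     */
    public class HierarchyAwareLruCache<V> {

        private final Map<String, V> map;

        public HierarchyAwareLruCache(final int maxSize) {
            // accessOrder = true turns the LinkedHashMap into an LRU map
            map = new LinkedHashMap<String, V>(maxSize, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, V> eldest) {
                    return size() > maxSize;
                }
            };
        }

        public synchronized V get(String path) {
            touchAncestors(path);
            return map.get(path);
        }

        public synchronized void put(String path, V item) {
            touchAncestors(path);
            map.put(path, item);
        }

        private void touchAncestors(String path) {
            // re-access every ancestor ('/a/b/c' -> '/a/b' -> '/a' -> '/')
            // so the upper levels of the hierarchy count as recently used
            int pos = path.lastIndexOf('/');
            while (pos > 0) {
                path = path.substring(0, pos);
                map.get(path);
                pos = path.lastIndexOf('/');
            }
            map.get("/");
        }
    }

with this sketch, a repeated request for e.g. '/foo' can still be served from the cache after a large traversal, because '/foo' and its ancestors are touched whenever any of their descendants is accessed; only the deep leaves get evicted.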

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (JCR-2442) make internal item cache hierarchy-aware

Posted by "Michael Dürig (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/JCR-2442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Dürig updated JCR-2442:
-------------------------------

    Attachment: JCR-2442.patch

Possible patch.

Since JCR-2498 should fix some of the observed performance issues, the approach in this patch is quite simplistic: use separate caches for items above and below a certain depth threshold (nodes at depth <= 1, properties at depth <= 2). Items at or below the threshold depth go into a HashMap and are thus never evicted. All other items go into an LRU map.
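
For illustration only, the split could look roughly like the sketch below (names are made up and not taken from the attached JCR-2442.patch):

    import java.util.HashMap;
    import java.util.LinkedHashMap;
    import java.util.Map;

    /**
     * Sketch of the two-tier idea: shallow items (nodes up to depth 1,
     * properties up to depth 2) live in a plain HashMap and are never
     * evicted; deeper items go into a size-bounded LRU map.
     */
    public class TwoTierItemCache<K, V> {

        private final Map<K, V> pinned = new HashMap<K, V>();
        private final Map<K, V> evictable;

        public TwoTierItemCache(final int maxEvictableSize) {
            evictable = new LinkedHashMap<K, V>(maxEvictableSize, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > maxEvictableSize;
                }
            };
        }

        public synchronized void put(K key, V item, int depth, boolean isNode) {
            int threshold = isNode ? 1 : 2;
            if (depth <= threshold) {
                pinned.put(key, item);      // shallow item: never evicted
            } else {
                evictable.put(key, item);   // deep item: subject to LRU eviction
            }
        }

        public synchronized V get(K key) {
            V item = pinned.get(key);
            return item != null ? item : evictable.get(key);
        }
    }

get() checks the pinned map first, so items near the root stay available even when the LRU part overflows during a deep traversal.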

WDYT?

> make internal item cache hierarchy-aware
> ----------------------------------------
>
>                 Key: JCR-2442
>                 URL: https://issues.apache.org/jira/browse/JCR-2442
>             Project: Jackrabbit Content Repository
>          Issue Type: Improvement
>          Components: jackrabbit-jcr2spi
>            Reporter: Stefan Guggisberg
>            Assignee: Michael Dürig
>         Attachments: JCR-2442.patch
>
>
> currently there are 2 configuration parameters which affect the performance of client-side tree traversals:
> - fetch-depth
> - size of item cache
> my goal is to minimize the number of server-roundtrips triggered by traversing the node hierarchy on the client.
> the current eviction policy doesn't seem to be ideal for this use case. in the case of relatively deep tree structures
> a request for e.g. '/foo' can easily cause a cache overflow and root nodes might get evicted from the cache.
> a following request to '/foo' cannot be served from cache but will trigger yet another deep fetch, despite the fact
> that the major part of the tree structure is still in the cache.
> increasing the cache size OTOH bears the risk of OOM errors since the memory footprint of the cached state seems 
> to be quite large. i tried several combinations of fetch depth and cache size, to no avail. i either ran into OOM errors 
> or performance was unacceptably slow due to an excessive number of server roundtrips.
> i further noticed that sync'ing existing cached state with the results of a deep fetch is rather slow, e.g.
> an initial request to '/foo' returns 11k items. the cache size is 10k, i.e. the cache cannot accommodate the entire 
> result set. assuming that /foo has been evicted, the following request to '/foo' will trigger another deep 
> fetch which this time takes considerably more time since the result set needs to be sync'ed with existing cached
> state. 
> using an LRU eviction policy and touching every node along the parent hierarchy when requesting an item might be a solution.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (JCR-2442) make internal item cache hierarchy-aware

Posted by "Stefan Guggisberg (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/JCR-2442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefan Guggisberg updated JCR-2442:
-----------------------------------

    Description: 
currently there are 2 configuration parameters which affect the performance of client-side tree traversals:

- fetch-depth
- size of item cache

my goal is to minimize the number of server-roundtrips triggered by traversing the node hierarchy on the client.

the current eviction policy doesn't seem to be ideal for this use case. in the case of relatively deep tree structures
a request for e.g. '/foo' can easily cause a cache overflow and root nodes might get evicted from the cache.
a following request to '/foo' cannot be served from cache but will trigger yet another deep fetch, despite the fact
that the major part of the tree structure is still in the cache.

increasing the cache size OTOH bears the risk of OOM errors since the memory footprint of the cached state seems 
to be quite large. i tried several combinations of fetch depth and cache size, to no avail. i either ran into OOM errors 
or performance was unacceptably slow due to an excessive number of server roundtrips.

i further noticed that sync'ing existing cached state with the results of a deep fetch is rather slow, e.g.
an initial request to '/foo' returns 11k items. the cache size is 10k, i.e. the cache cannot accommodate the entire 
result set. assuming that /foo has been evicted, the following request to '/foo' will trigger another deep 
fetch which this time takes considerably more time since the result set needs to be sync'ed with existing cached
state. 

using an LRU eviction policy and touching every node along the parent hierarchy when requesting an item might be a solution.

  was:
currently there are 2 configuration parameters which affect the performance of client-side tree traversals:

- fetch-depth
- size of item cache

my goal is to minimize the number of server roundtrips triggered by traversing the node hierarchy on the client.

the current eviction policy doesn't seem to be ideal for this use case. in the case of relatively deep tree structures
a request for e.g. '/foo' can easily cause a cache overflow and root nodes might get evicted from the cache.
a following request to '/foo' cannot be served from cache but will cause a deep fetch again, despite the fact
that the major part of the tree structure is still in the cache.

increasing the cache size OTOH bears the risk of OOM errors since the memory footprint of the cached state seems to be quite large.

using an LRU eviction policy and touching every node along the parent hierarchy when requesting an item might be a solution.


> make internal item cache hierarchy-aware
> ----------------------------------------
>
>                 Key: JCR-2442
>                 URL: https://issues.apache.org/jira/browse/JCR-2442
>             Project: Jackrabbit Content Repository
>          Issue Type: Improvement
>          Components: jackrabbit-jcr2spi
>            Reporter: Stefan Guggisberg
>            Assignee: Michael Dürig
>
> currently there are 2 configuration parameters which affect the performance of client-side tree traversals:
> - fetch-depth
> - size of item cache
> my goal is to minimize the number of server-roundtrips triggered by traversing the node hierarchy on the client.
> the current eviction policy doesn't seem to be ideal for this use case. in the case of relatively deep tree structures
> a request for e.g. '/foo' can easily cause a cache overflow and root nodes might get evicted from the cache.
> a following request to '/foo' cannot be served from cache but will trigger yet another deep fetch, despite the fact
> that the major part of the tree structure is still in the cache.
> increasing the cache size OTOH bears the risk of OOM errors since the memory footprint of the cached state seems 
> to be quite large. i tried several combinations of fetch depth and cache size, to no avail. i either ran into OOM errors 
> or performance was unacceptably slow due to an excessive number of server roundtrips.
> i further noticed that sync'ing existing cached state with the results of a deep fetch is rather slow, e.g.
> an initial request to '/foo' returns 11k items. the cache size is 10k, i.e. the cache cannot accommodate the entire 
> result set. assuming that /foo has been evicted, the following request to '/foo' will trigger another deep 
> fetch which this time takes considerably more time since the result set needs to be sync'ed with existing cached
> state. 
> using an LRU eviction policy and touching every node along the parent hierarchy when requesting an item might be a solution.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.