Posted to oak-issues@jackrabbit.apache.org by "Marcel Reutegger (JIRA)" <ji...@apache.org> on 2017/07/03 07:13:00 UTC

[jira] [Resolved] (OAK-6180) Tune cursor batch/limit size

     [ https://issues.apache.org/jira/browse/OAK-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcel Reutegger resolved OAK-6180.
-----------------------------------
       Resolution: Fixed
    Fix Version/s: 1.7.3

Applied patch to trunk: http://svn.apache.org/r1800596

> Tune cursor batch/limit size
> ----------------------------
>
>                 Key: OAK-6180
>                 URL: https://issues.apache.org/jira/browse/OAK-6180
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: mongomk
>            Reporter: Marcel Reutegger
>            Assignee: Marcel Reutegger
>             Fix For: 1.8, 1.7.3
>
>         Attachments: OAK-6180.patch
>
>
> MongoDocumentStore uses the default batch size, which means MongoDB will initially return 100 documents and then as many documents as fit into 4MB. Depending on the document size, the number of documents may be quite high and the risk of hitting the 60-second query timeout defined by Oak increases.
> Tuning the batch size (or using a limit) may also help reduce the amount of data transferred from MongoDB to Oak. The DocumentNodeStore fetches child nodes in batches as well, with slightly different logic: the initial batch size is 100 and every subsequent batch doubles in size until it reaches 1600. Bandwidth is wasted if the MongoDB Java driver fetches far more documents than Oak actually requested.
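
For illustration, here is a minimal sketch of how a batch size and limit can be set on a cursor with the MongoDB Java driver, together with the start-at-100-and-double growth pattern described above. The collection name, query, and constants are assumptions for the example and are not taken from the actual MongoDocumentStore patch.

    import com.mongodb.client.FindIterable;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.MongoCursor;
    import org.bson.Document;

    import static com.mongodb.client.model.Filters.regex;

    public class CursorBatchSketch {

        // Assumed values mirroring the numbers mentioned in the issue,
        // not constants from the Oak code base.
        private static final int INITIAL_BATCH_SIZE = 100;
        private static final int MAX_BATCH_SIZE = 1600;

        // Grow the batch the way the child-node fetch logic is described:
        // start at 100 and double until the ceiling is reached.
        static int nextBatchSize(int current) {
            return Math.min(current * 2, MAX_BATCH_SIZE);
        }

        public static void main(String[] args) {
            MongoCollection<Document> nodes = MongoClients.create("mongodb://localhost:27017")
                    .getDatabase("oak")
                    .getCollection("nodes");

            // Hypothetical query for child documents under a path prefix.
            FindIterable<Document> children = nodes.find(regex("_id", "^2:/content/"))
                    // Ask the driver to fetch this many documents per round trip
                    // instead of the default (100, then as many as fit into 4MB).
                    .batchSize(INITIAL_BATCH_SIZE)
                    // Cap the result at what the caller actually asked for, so the
                    // driver does not transfer more documents than will be consumed.
                    .limit(MAX_BATCH_SIZE);

            try (MongoCursor<Document> cursor = children.iterator()) {
                while (cursor.hasNext()) {
                    Document doc = cursor.next();
                    // process the document ...
                }
            }
        }
    }

Keeping the limit aligned with the caller's own batch size means the driver never fetches a large trailing batch that the iteration then abandons, which is the bandwidth waste the description refers to.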



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)