Posted to oak-issues@jackrabbit.apache.org by "Chetan Mehrotra (JIRA)" <ji...@apache.org> on 2017/03/27 05:43:41 UTC
[jira] [Updated] (OAK-2667) Add new tooling to troubleshoot mongodb slow queries
[ https://issues.apache.org/jira/browse/OAK-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chetan Mehrotra updated OAK-2667:
---------------------------------
Component/s: (was: core)
documentmk
> Add new tooling to troubleshoot mongodb slow queries
> ----------------------------------------------------
>
> Key: OAK-2667
> URL: https://issues.apache.org/jira/browse/OAK-2667
> Project: Jackrabbit Oak
> Issue Type: New Feature
> Components: documentmk
> Affects Versions: 1.0.12
> Reporter: Thierry Ygé
> Priority: Minor
>
> Currently some customers have reported hugely slow queries at the MongoDB level, for example:
> 2015-03-16T20:01:24.870+0100 [conn12] query aem-author.nodes query: { $query: { id: { $gt: … }, modified: { $gte: … } }, $orderby: { id: … }, $hint: { id: … } } planSummary: IXSCAN { _id: … } ntoreturn:0 ntoskip:0 nscanned:9021956 nscannedObjects:9021956 keyUpdates:0 numYields:251409 locks(micros) r:553377233 nreturned:13 reslen:5737 1513722ms
> 2015-03-18T14:14:15.903+0100 [conn52] query aem-author.nodes query: { $query: { id: { $gt: … }, modified: { $gte: … } }, $orderby: { id: … }, $hint: { id: … } } planSummary: IXSCAN { _id: … } ntoreturn:0 ntoskip:0 nscanned:9047318 nscannedObjects:9047318 keyUpdates:0 numYields:223663 locks(micros) r:390010493 nreturned:73 reslen:34275 1229400ms
> When this happens it has a business impact: for the roughly 30 minutes the slow query runs, the author instance is unusable.
> It would be nice to have a JMX MBean, similar to the one that tracks slow JCR queries, but here tracking the queries performed against MongoDB, and to keep a stack trace for each long-running query so the culprit code that triggered it can be found. A threshold parameter could define the response-time limit above which a query gets tracked.
> At the log level I think some "perflogger" loggers already exist, but I am not sure whether they can include a stack trace (maybe at TRACE level) for this purpose. That would help to quickly identify the thread and code behind the long-running query in MongoDB.
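As a rough illustration of the tracking idea above, here is a minimal sketch (all class and method names are hypothetical, not part of Oak): each MongoDB call is wrapped, its duration measured, and when it exceeds a configurable threshold the duration plus the caller's stack trace are recorded so they could be exposed via a JMX MBean.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;

/**
 * Hypothetical sketch of a slow-query tracker: wraps a MongoDB call,
 * and when it runs longer than the configured threshold, records the
 * duration together with the calling thread's stack trace.
 */
public class SlowQueryTracker {
    private final long thresholdMillis;
    // Last recorded slow query; in a real implementation these would be
    // exposed as JMX MBean attributes.
    private volatile String lastSlowQueryStackTrace;
    private volatile long lastSlowQueryMillis;

    public SlowQueryTracker(long thresholdMillis) {
        this.thresholdMillis = thresholdMillis;
    }

    /** Runs the query and records it if it exceeds the threshold. */
    public <T> T track(String queryDescription, Callable<T> query) throws Exception {
        long start = System.nanoTime();
        try {
            return query.call();
        } finally {
            long elapsed = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
            if (elapsed >= thresholdMillis) {
                StringBuilder sb = new StringBuilder(queryDescription)
                        .append(" took ").append(elapsed).append("ms\n");
                // Capture the stack trace of the thread that issued the query,
                // so the triggering code path can be identified later.
                for (StackTraceElement e : Thread.currentThread().getStackTrace()) {
                    sb.append("\tat ").append(e).append('\n');
                }
                lastSlowQueryMillis = elapsed;
                lastSlowQueryStackTrace = sb.toString();
            }
        }
    }

    public String getLastSlowQueryStackTrace() {
        return lastSlowQueryStackTrace;
    }

    public long getLastSlowQueryMillis() {
        return lastSlowQueryMillis;
    }
}
```

The wrapper approach keeps the tracking logic out of the query code itself; only call sites that issue MongoDB queries need to go through `track(...)`.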
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)