Posted to dev@jena.apache.org by "Bala Kolla (JIRA)" <ji...@apache.org> on 2014/11/11 05:26:34 UTC

[jira] [Updated] (JENA-801) When the server is under load, many queries pile up and seem to be in some kind of deadlock.

     [ https://issues.apache.org/jira/browse/JENA-801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bala Kolla updated JENA-801:
----------------------------
    Attachment: TracesWithManyItersOfBlockMgrJournal_Valid_Method.txt
                TracesWithManyItersOfBlockMgrJournal_getRead_Method.txt

With the latest changes to use the GuavaCache in the node table cache and the BlockMgrCache, I am not seeing much lock contention, but many queries are still piling up with very long loops of calls in BlockMgrJournal, and this is driving up CPU usage. I am attaching the thread stack traces for two such queries.
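(For clarity, here is a minimal sketch of the kind of bounded, concurrent Guava-backed lookup cache I am describing. The class and method names, key/value types, and sizes below are placeholders for illustration, not the actual TDB internals.)

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import java.util.concurrent.ExecutionException;

    // Illustration only: a bounded, thread-safe node lookup cache built on Guava.
    // String and Long stand in for the real node and NodeId types;
    // loadFromNodeTable() is a placeholder for the disk lookup.
    public class GuavaNodeCacheSketch {
        private final Cache<String, Long> nodeToId =
            CacheBuilder.newBuilder()
                .maximumSize(500_000)     // bound the cache rather than synchronizing one shared map
                .concurrencyLevel(16)     // allow concurrent readers and writers
                .build();

        public Long getNodeId(final String node) throws ExecutionException {
            // get(key, loader) computes the value at most once per key under contention
            return nodeToId.get(node, () -> loadFromNodeTable(node));
        }

        private Long loadFromNodeTable(String node) {
            return (long) node.hashCode();   // placeholder for the real node table read
        }
    }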

> When the server is under load, many queries pile up and seem to be in some kind of deadlock.
> ----------------------------------------------------------------------------------------------------
>
>                 Key: JENA-801
>                 URL: https://issues.apache.org/jira/browse/JENA-801
>             Project: Apache Jena
>          Issue Type: Bug
>          Components: TDB
>    Affects Versions: TDB 0.9.4, Jena 2.11.2
>            Reporter: Bala Kolla
>         Attachments: ThreadLocksInBlockMgrJournalAfterGuavaCacheInNodeTable.htm, TracesWithManyItersOfBlockMgrJournal_Valid_Method.txt, TracesWithManyItersOfBlockMgrJournal_getRead_Method.txt, WAITDataReportShowingTheLockContention.zip, WAITDataReportShowingTheLockContentionWithoutQueryFilter.zip
>
>
> We were testing our server with repositories of varied sizes, and in almost all cases, when the server reaches its peak capacity (the maximum number of users it can support), the queries appear to pile up because of lock contention in NodeTableCache.
> Here are some details about the repository:
> Size of indices on disk: 150 GB
> Type of hard disk used: SSD and HDD, both with a large amount of RAM (we see the same result in either case)
> OS: Linux
> Details on the user load:
> We are trying to simulate a very active user load in which all users execute many use cases, resulting in many queries and updates against TDB.
> I would like to know the possible ways to work around or avoid this situation. I am considering the following; please let me know if there is any other way around this bottleneck.
> Control updates to the triple store so that we only apply them when there are not many queries pending. We would have to experiment with how this impacts the use cases.
> Is there any other way to make this lock contention go away? Can we have multiple instances of this cache? For example, most (90%) of our queries are executed within a query scope (per project). So, could we have a separate NodeTable cache for each query scope (a project, in our case) plus one global cache?
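
(Purely as an illustration of the last question above, a rough sketch of per-scope caches plus a shared global cache might look like the following. The scope keys, cache sizes, and key/value types are assumptions for the example, not existing Jena API.)

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // Illustration of "one cache per query scope (project) plus one global cache".
    // Key and value types are placeholders for whatever the node table actually stores.
    public class ScopedNodeCacheSketch {
        private final Cache<String, Long> globalCache =
            CacheBuilder.newBuilder().maximumSize(1_000_000).build();

        private final ConcurrentMap<String, Cache<String, Long>> perScope =
            new ConcurrentHashMap<>();

        private Cache<String, Long> cacheForScope(String scope) {
            // one independent cache per project/scope, created lazily
            return perScope.computeIfAbsent(scope,
                s -> CacheBuilder.newBuilder().maximumSize(100_000).build());
        }

        public Long lookup(String scope, String node) {
            Cache<String, Long> scoped = cacheForScope(scope);
            Long id = scoped.getIfPresent(node);
            if (id == null) {
                id = globalCache.getIfPresent(node);   // fall back to the shared cache
                if (id != null) {
                    scoped.put(node, id);              // promote into the scope-local cache
                }
            }
            return id;   // null means the caller still has to hit the node table on disk
        }
    }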



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)