Posted to dev@jena.apache.org by "Rob Vesse (JIRA)" <ji...@apache.org> on 2014/10/21 13:30:34 UTC

[jira] [Comment Edited] (JENA-801) When the server is under load, many queries are piling up and seems to be in some kind of dead lock.

    [ https://issues.apache.org/jira/browse/JENA-801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178288#comment-14178288 ] 

Rob Vesse edited comment on JENA-801 at 10/21/14 11:30 AM:
-----------------------------------------------------------

Another design choice that might be viable would be to implement a tiered node cache (whether using Guava or the existing cache).

In this design there would be a {{ThreadLocal}} of caches: each thread would consult its own per-thread cache first (which could be lock-free) and only lock on the central shared cache when a miss occurs.  If the central cache has to be consulted, the value can then be copied into the per-thread cache for future lock-free reuse.

This could be implemented as a decorator, allowing it to be wrapped around an arbitrary cache implementation.
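
A rough sketch of such a decorator might look like this ({{SimpleCache}} is just a simplified stand-in interface for illustration, not the actual Jena or Guava cache API):

{code:java}
// Illustrative sketch only. SimpleCache is a stand-in interface,
// not Jena's actual cache API.
interface SimpleCache<K, V> {
    V get(K key);              // returns null on a miss
    void put(K key, V value);
}

// Two-tier decorator: a per-thread cache (no locking) in front of a shared cache.
class TieredCache<K, V> implements SimpleCache<K, V> {
    private final SimpleCache<K, V> shared;                 // central cache, guarded by a lock
    private final ThreadLocal<SimpleCache<K, V>> local;     // per-thread caches, lock-free

    TieredCache(SimpleCache<K, V> shared,
                java.util.function.Supplier<SimpleCache<K, V>> localCacheFactory) {
        this.shared = shared;
        this.local = ThreadLocal.withInitial(localCacheFactory);
    }

    @Override
    public V get(K key) {
        SimpleCache<K, V> mine = local.get();
        V value = mine.get(key);            // 1. consult the per-thread cache first (no lock)
        if (value != null)
            return value;
        synchronized (shared) {             // 2. on a miss, lock and consult the shared cache
            value = shared.get(key);
        }
        if (value != null)
            mine.put(key, value);           // 3. copy into the per-thread cache for lock-free reuse
        return value;
    }

    @Override
    public void put(K key, V value) {
        synchronized (shared) {
            shared.put(key, value);         // writes always go to the shared cache
        }
        local.get().put(key, value);        // and to this thread's cache
    }
}
{code}

The trade-off would be some extra memory per thread and possible staleness in the per-thread tier (evictions from the shared cache are not propagated), in exchange for lock-free reads on the hot path.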


was (Author: rvesse):
Another design choice that might be viable would be to implement a tiered node cache (whether using Guava or the existing cache).

In this design there would be a {{ThreadLocal}} of caches and each thread would use its own cache first (which could be lock free) and then only lock on the central shared cache if a miss occurs.  If the central cache has to be consulted the value can then be copied into the per-thread cache for future reuse

This could be implemented as a decorator allowing it to be used around an arbitrary cache implementation.

> When the server is under load, many queries are piling up and seems to be in some kind of dead lock.
> ----------------------------------------------------------------------------------------------------
>
>                 Key: JENA-801
>                 URL: https://issues.apache.org/jira/browse/JENA-801
>             Project: Apache Jena
>          Issue Type: Bug
>          Components: TDB
>    Affects Versions: TDB 0.9.4, Jena 2.11.2
>            Reporter: Bala Kolla
>         Attachments: WAITDataReportShowingTheLockContention.zip, WAITDataReportShowingTheLockContentionWithoutQueryFilter.zip
>
>
> We were testing our server with repositories of various sizes, and in almost all cases, when the server reaches its peak capacity (the maximum number of users it can support), the queries pile up because of lock contention in NodeTableCache.
> Here are some details about the repository:
> size of indices on disk - 150GB
> type of hard disk used - SSD and HDD with high RAM (seeing the same result in both cases)
> OS - Linux
> Details on the user load:
> We are trying to simulate a very active user load where all the users are executing many use cases that result in many queries and updates on TDB.
> I would like to know the possible ways to work around and avoid this situation. I am thinking of the following; please let me know if there is any other way to work around this bottleneck.
> Control the updates to the triple store so that we only apply them when there are not many queries pending. We would have to experiment with how this impacts the use cases.
> Is there any other way to make this lock contention go away? Can we have multiple instances of this cache? For example, many (90%) of our queries are executed within a query scope (per project). So, can we have a separate NodeTable cache for each query scope (project, in our case) and one global cache?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)