Posted to issues@commons.apache.org by "Wiktor N (JIRA)" <ji...@apache.org> on 2015/03/18 22:02:39 UTC

[jira] [Updated] (JCS-144) BlockDiskCache hangs on SEVERE: Region [TMS] Failure getting from disk--IOException

     [ https://issues.apache.org/jira/browse/JCS-144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wiktor N updated JCS-144:
-------------------------
    Attachment: BlockDiskCache.java.patch

Fix the bug in a similar way to how it is done in IndexedDiskCache.
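The attached patch is not reproduced here; the following is only a minimal sketch of the general release-before-reset pattern the comment alludes to, with hypothetical method names and simplified locking, not the actual JCS source:

import java.io.IOException;
import java.io.Serializable;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustration only: remember the failure, release the read lock first,
// and only then call reset(), which takes the write lock.
public class ReadThenResetSketch
{
    private final ReentrantReadWriteLock storageLock = new ReentrantReadWriteLock();

    public Serializable processGet(String key)
    {
        boolean resetNeeded = false;
        Serializable result = null;

        storageLock.readLock().lock();
        try
        {
            result = readFromDisk(key);   // may throw IOException
        }
        catch (IOException e)
        {
            resetNeeded = true;           // defer reset until the read lock is released
        }
        finally
        {
            storageLock.readLock().unlock();
        }

        if (resetNeeded)
        {
            reset();                      // safe: this thread no longer holds the read lock
        }
        return result;
    }

    private void reset()
    {
        storageLock.writeLock().lock();
        try
        {
            // drop and recreate the on-disk structures (omitted)
        }
        finally
        {
            storageLock.writeLock().unlock();
        }
    }

    private Serializable readFromDisk(String key) throws IOException
    {
        return null;                      // placeholder for the real disk read
    }
}

Deferring reset() until after the read lock has been released avoids asking ReentrantReadWriteLock for a read-to-write upgrade, which it does not support.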

> BlockDiskCache hangs on SEVERE: Region [TMS] Failure getting from disk--IOException
> -----------------------------------------------------------------------------------
>
>                 Key: JCS-144
>                 URL: https://issues.apache.org/jira/browse/JCS-144
>             Project: Commons JCS
>          Issue Type: Bug
>          Components: Indexed Disk Cache
>    Affects Versions: jcs-2.0-alpha-2
>         Environment: version 1.7.0_75, vendor Oracle Corporation
>            Reporter: Wiktor N
>         Attachments: BlockDiskCache.java.patch
>
>
> If I get a failure reading an object from the cache, the thread hangs with the following stack trace:
> "AWT-EventQueue-0" prio=6 tid=0x00000000112aa000 nid=0x1444 waiting on condition [0x0000000013dc9000]
>    java.lang.Thread.State: WAITING (parking)
> 	at sun.misc.Unsafe.park(Native Method)
> 	- parking to wait for  <0x00000007cbf84b00> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> 	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> 	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
> 	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
> 	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
> 	at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:945)
> 	at org.apache.commons.jcs.auxiliary.disk.block.BlockDiskCache.reset(BlockDiskCache.java:643)
> 	at org.apache.commons.jcs.auxiliary.disk.block.BlockDiskCache.processGet(BlockDiskCache.java:343)
> 	at org.apache.commons.jcs.auxiliary.AbstractAuxiliaryCacheEventLogging.getWithEventLogging(AbstractAuxiliaryCacheEventLogging.java:109)
> 	at org.apache.commons.jcs.auxiliary.disk.AbstractDiskCache.doGet(AbstractDiskCache.java:771)
> 	at org.apache.commons.jcs.auxiliary.disk.AbstractDiskCache.get(AbstractDiskCache.java:279)
> 	at org.apache.commons.jcs.engine.control.CompositeCache.get(CompositeCache.java:550)
> 	- locked <0x00000007cbf62c00> (a org.apache.commons.jcs.engine.control.CompositeCache)
> 	at org.apache.commons.jcs.engine.control.CompositeCache.get(CompositeCache.java:455)
> 	at org.apache.commons.jcs.access.CacheAccess.getCacheElement(CacheAccess.java:125)
> Looking at the code in BlockDiskCache, I see that we first acquire the read lock in processGet and then try to acquire the write lock in reset(). According to the API spec (http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.html),
> upgrading a lock from read to write is not supported ("Additionally, a writer can acquire the read lock, but not vice-versa").
> Either the read lock should be released early on exception, or a different locking mechanism should be used.
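
To make the failure mode concrete, here is a minimal standalone illustration (not JCS code) of the lock-upgrade problem described above; the write-lock call parks forever, exactly as in the stack trace:

import java.util.concurrent.locks.ReentrantReadWriteLock;

// Demonstrates that ReentrantReadWriteLock does not allow upgrading a read
// lock to a write lock: the writeLock().lock() call below never returns.
public class LockUpgradeDemo
{
    public static void main(String[] args)
    {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        lock.readLock().lock();
        System.out.println("read lock held; attempting write lock...");

        lock.writeLock().lock();          // parks forever, same as the AWT-EventQueue-0 thread above
        System.out.println("never reached");
    }
}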



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)