Posted to issues@commons.apache.org by "Maurizio Cucchiara (Commented) (JIRA)" <ji...@apache.org> on 2011/10/10 10:54:30 UTC

[jira] [Commented] (OGNL-20) Performance - Replace synchronized blocks with ReentrantReadWriteLock

    [ https://issues.apache.org/jira/browse/OGNL-20?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123959#comment-13123959 ] 

Maurizio Cucchiara commented on OGNL-20:
----------------------------------------

Hi guys,
I'm trying to apply a revisited version of Daniel's ConcurrentHashMap.
During the refactoring I realized that CacheEntryFactory needs to throw a checked exception, since instantiating an element (or even building it) can easily fail.
So I chose to add a new CacheException as a subclass of the existing OgnlException.
As a result, the need to catch this exception proliferates: most of the OgnlRuntime methods would have to change their signatures (and so would their callers). Generally speaking I have no problem with that; an API that provides a custom exception sounds good to me. But before I proceed, I would like to know whether this is the right way to go or whether there is a better one.
To illustrate, I'm attaching some pieces of code that show the new cache access:
{code:java}
public interface CacheEntryFactory<T, V>
{
    public V create( T key )
        throws CacheException;
}

{code}
{code:java}
    public static Map<String, PropertyDescriptor> getPropertyDescriptors( final Class<?> targetClass )
        throws IntrospectionException, OgnlException
    {
        return _propertyDescriptorCache.get( targetClass, new ClassCacheEntryFactory<Map<String, PropertyDescriptor>>( )
        {
            public Map<String, PropertyDescriptor> create( Class<?> key )
                throws CacheException
            {
                Map<String, PropertyDescriptor> result = new HashMap<String, PropertyDescriptor>( 101 );
                PropertyDescriptor[] pda;
                try
                {
                    pda = Introspector.getBeanInfo( targetClass ).getPropertyDescriptors( );

                    .......
                }
                catch ( IntrospectionException e )
                {
                    throw new CacheException( e );
                }
                catch ( OgnlException e )
                {
                    throw new CacheException( e );
                }
                return result;
            }
        } );
    }
{code}
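To make the discussion concrete, here is a minimal, self-contained sketch of what a ConcurrentHashMap-backed cache behind a call like {{_propertyDescriptorCache.get( key, factory )}} could look like. The class name ConcurrentCache is illustrative, not taken from the actual patch, and CacheException extends plain Exception here only so the sketch compiles standalone (in the real proposal it would subclass OgnlException):

{code:java}
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-in: the proposed class would extend OgnlException.
class CacheException extends Exception
{
    CacheException( Throwable cause ) { super( cause ); }
}

interface CacheEntryFactory<T, V>
{
    V create( T key ) throws CacheException;
}

// Hypothetical cache shape (not the actual patch): lock-free reads via
// ConcurrentHashMap, with putIfAbsent to resolve creation races.
class ConcurrentCache<K, V>
{
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<K, V>();

    // Returns the cached value, creating it on a miss. Two threads may
    // race to create the same entry; putIfAbsent keeps exactly one copy.
    public V get( K key, CacheEntryFactory<K, V> factory )
        throws CacheException
    {
        V value = cache.get( key );
        if ( value == null )
        {
            value = factory.create( key );
            V previous = cache.putIfAbsent( key, value );
            if ( previous != null )
            {
                value = previous; // another thread won the race
            }
        }
        return value;
    }
}
{code}

The trade-off of this pattern is that a factory may occasionally run twice for the same key under contention, but only one result is ever published, and reads never block.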

WDYT?
                
> Performance - Replace synchronized blocks with ReentrantReadWriteLock
> ---------------------------------------------------------------------
>
>                 Key: OGNL-20
>                 URL: https://issues.apache.org/jira/browse/OGNL-20
>             Project: OGNL
>          Issue Type: Improvement
>         Environment: ALL
>            Reporter: Greg Lively
>         Attachments: Bench Results.txt, Caching_Mechanism_Benchmarks.patch
>
>
> I've noticed a lot of synchronized blocks of code in OGNL. For the most part, these synchronized blocks are controlling access to HashMaps, etc. I believe this could be done far better using ReentrantReadWriteLock. ReentrantReadWriteLock allows unlimited concurrent reads, and single-threaded access only for writes. Perfect in an environment where reads far outnumber writes, which is typically the scenario for caching. Plus the access control can be tuned separately for reads and writes, not just one big synchronized{} wrapping a bunch of code.
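
For reference, the pattern the issue proposes looks roughly like this. A minimal sketch, assuming a plain HashMap guarded by a ReentrantReadWriteLock (the class name RwLockCache is illustrative, not from OGNL):

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of replacing synchronized blocks with a read/write lock:
// many threads may hold the read lock at once, while a writer takes
// the write lock exclusively.
class RwLockCache<K, V>
{
    private final Map<K, V> map = new HashMap<K, V>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public V get( K key )
    {
        lock.readLock().lock();
        try
        {
            return map.get( key );
        }
        finally
        {
            lock.readLock().unlock();
        }
    }

    public void put( K key, V value )
    {
        lock.writeLock().lock();
        try
        {
            map.put( key, value );
        }
        finally
        {
            lock.writeLock().unlock();
        }
    }
}
{code}

Compared with the ConcurrentHashMap approach discussed in the comments, this keeps the single-copy creation guarantee simple, at the cost of readers briefly blocking whenever a writer holds the lock.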

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira