Posted to dev@commons.apache.org by Mat <ma...@gmail.com> on 2006/12/12 11:45:54 UTC

1.3 borrowObject synchronized around the factory methods

Hi,
I've seen that borrowObject in GenericObjectPool release 1.3 has
become synchronized around the factory methods.
So, if I am not wrong, a single borrowObject that calls one slow
makeObject, for one connection for example, will block the entire pool
from releasing objects?
Maybe I misunderstood the usage of the new borrowObject in the 1.3
release, but if this is the case I think upgrading older code will run
into various problems, and if borrowObject blocks on the factory
methods there will be a huge performance issue.
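
To make the concern concrete, this is roughly the pattern I mean (a
simplified sketch with illustrative names, not the actual
GenericObjectPool source):

    import java.util.LinkedList;
    import org.apache.commons.pool.PoolableObjectFactory;

    // Sketch of a pool whose borrowObject holds the pool-wide lock
    // while the factory runs: one slow makeObject() stalls every other
    // borrow and return until it completes.
    class BigLockPool {
        private final LinkedList idle = new LinkedList();
        private final PoolableObjectFactory factory;

        BigLockPool(PoolableObjectFactory factory) {
            this.factory = factory;
        }

        public synchronized Object borrowObject() throws Exception {
            if (!idle.isEmpty()) {
                return idle.removeFirst();
            }
            // slow factory call (e.g. opening a socket) runs under the lock
            return factory.makeObject();
        }

        public synchronized void returnObject(Object obj) {
            idle.addLast(obj); // blocked while any makeObject is in progress
        }
    }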

Cheers
Mat


Re: 1.3 borrowObject synchronized around the factory methods

Posted by Sandy McArthur <sa...@apache.org>.
I had missed it.

I'm very much aware of the bottleneck and I feel its pain too. But
when the trade-off is between thread-safety and performance, I'm going
to pick correct behavior over speed for the pool implementations
provided by Apache. It's irresponsible of us to intentionally ship
code that may spuriously break.

You have a couple of choices:

1. Use your own pool implementation that isn't thread-safe or doesn't
implement as many features. The Apache License allows you to take the
existing source and modify it to your own needs, so you don't have to
start from scratch. A highly concurrent thread-safe pool that *does
not* implement maxActive/maxIdle/minIdle or similar features is rather
easy to implement: just synchronize access to the internal List (see
the sketch after this list).

2. Provide a patch that is thread-safe, backwards compatible, and
performant. The backwards-compatible part will be very hard because of
the way the existing pools are implemented in Pool 1.3 and below. You
can check out the trunk for Pool 2 and see the composite pool
implementation, which is designed to be easier to refactor and improve
because it prohibits sub-classing.
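
To illustrate option 1, something along these lines would do (a
minimal sketch only, with illustrative names and no
maxActive/maxIdle/minIdle support; not code we ship):

    import java.util.LinkedList;
    import org.apache.commons.pool.PoolableObjectFactory;

    // Minimal sketch: only the internal list is synchronized, so the
    // (possibly slow) factory methods run outside any pool-wide lock.
    class SimpleListPool {
        private final LinkedList idle = new LinkedList();
        private final PoolableObjectFactory factory;

        SimpleListPool(PoolableObjectFactory factory) {
            this.factory = factory;
        }

        public Object borrowObject() throws Exception {
            Object obj;
            synchronized (idle) {
                obj = idle.isEmpty() ? null : idle.removeFirst();
            }
            if (obj == null) {
                obj = factory.makeObject();  // created outside the lock
            }
            factory.activateObject(obj);     // also outside the lock
            return obj;
        }

        public void returnObject(Object obj) throws Exception {
            factory.passivateObject(obj);    // outside the lock
            synchronized (idle) {
                idle.addLast(obj);
            }
        }
    }

Borrowers that need a new object pay only their own makeObject cost;
they never queue up behind somebody else's factory call.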

On 12/18/06, Mat <ma...@gmail.com> wrote:
> Hi Sandy,
> have you read my previous mail?
>
> Kind regards
> Matija

-- 
Sandy McArthur

"He who dares not offend cannot be honest."
- Thomas Paine


Re: 1.3 borrowObject synchronized around the factory methods

Posted by Mat <ma...@gmail.com>.
Hi Sandy,
have you read my previous mail?

Kind regards
Matija


Re: 1.3 borrowObject synchronized around the factory methods

Posted by Mat <ma...@gmail.com>.
Hi Sandy,
thanks for the reply.


Moving to the new version, the code has basically switched from
fine-grained locks to a "big lock" that funnels almost all calls
through the same critical region. This assumes that the time spent in
the factory API is negligible, which may be the case for certain kinds
of applications but is not the general case.
 
For example, when you are managing socket resources, creation,
validation and destruction times are significant compared with the
time the application actually spends using the pooled resource.
In these cases the big lock jeopardizes performance: you can spend as
much time waiting for validation or creation inside the factory, which
runs under the big lock, as you spend in the application itself.
 
Even if we leave out the validation time, the creation time of the
objects is still time spent in the critical region. This implies that
when you have to re-populate the entries for any reason, access to the
pool becomes sequential, i.e., only one thread at a time can actually
make progress in the application code.

Please take this real-world case as an example:

    A high-load server has to serve 50 parallel requests *assuring
valid objects* (avg validateObject time 1000 ms) with a slow
makeObject (avg time 2500 ms).
    Assuming an "out of the pool" average request service time of
2000 ms (not important here, since it is outside the lock), we get the
following behaviour:
        In the worst case, where we have no valid objects and must
recreate all of them (for example when the backend is restarted), the
wait times for a resource are:
            1st req:  wait time: 2500 ms + 1000 ms
            2nd req:  wait time: 1st req wait time + 2500 ms + 1000 ms
            n-th req: wait time: (n-1)-th req wait time + 2500 ms
                      + 1000 ms = n * 3500 ms
*        Req avg wait time synchronizing around the factory:*
(SUM for i from 1 to N of i*3500 ms)/N = (3500 ms * (50*(50+1)/2))/50
= *89.25 seconds*
*        Req avg wait time NOT synchronizing around the factory:*
2500 ms + 1000 ms = *3.5 seconds*

    If we consider the optimal case, counting only the validation time:

*        Req avg wait time synchronizing around the factory:*
(SUM for i from 1 to N of i*1000 ms)/N = (1000 ms * (50*(50+1)/2))/50
= *25.5 seconds*
*        Req avg wait time NOT synchronizing around the factory:*
1000 ms = *1 second*

I hope I didn't make calculation mistakes :-)
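
A quick sanity check of the arithmetic (just a throwaway snippet, the
numbers are the assumed averages from the example above):

    // N = 50 requests, makeObject 2500 ms, validateObject 1000 ms.
    public class WaitTimeCheck {
        public static void main(String[] args) {
            int n = 50;
            double perRequestMs = 2500 + 1000;   // worst case: make + validate
            double sumMs = 0;
            for (int i = 1; i <= n; i++) {
                sumMs += i * perRequestMs;       // i-th request waits i * 3500 ms
            }
            System.out.println("serialized avg wait: " + sumMs / n / 1000 + " s");    // 89.25
            System.out.println("concurrent avg wait: " + perRequestMs / 1000 + " s"); // 3.5
        }
    }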

In the example above I am also not considering other real-world
factors, for example occasional unusually long validate/make calls,
that would degrade performance further.

I am not talking about micro-benchmarks but about behaviour observed
in real-world production systems that pool socket resources.

Thanks Again and Cheers
Mat

Sandy McArthur wrote:
> On 12/12/06, Mat <ma...@gmail.com> wrote:
>> if borrowObject blocks on the factory
>> methods there will be a huge performance issue.
>
> Yes, but without it there are thread-safety issues. I agree, it's a
> performance issue, but I'm not sure how huge it is except under
> micro-benchmarks.
>


Re: 1.3 borrowObject synchronized around the factory methods

Posted by Sandy McArthur <sa...@apache.org>.
On 12/12/06, Mat <ma...@gmail.com> wrote:
> if borrowObject blocks on the factory
> methods there will be a huge performance issue.

Yes, but without it there are thread-safety issues. I agree, it's a
performance issue, but I'm not sure how huge it is except under
micro-benchmarks.

-- 
Sandy McArthur

"He who dares not offend cannot be honest."
- Thomas Paine

---------------------------------------------------------------------
To unsubscribe, e-mail: commons-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: commons-dev-help@jakarta.apache.org