Posted to dev@openwebbeans.apache.org by Romain Manni-Bucau <rm...@gmail.com> on 2013/01/19 17:44:20 UTC

InstanceImpl & cache?

Hi guys,

is there any reason why the result of
org.apache.webbeans.inject.instance.InstanceImpl#resolveBeans() is not
cached in the InstanceImpl instance (whatever the cache is - a volatile
var, an AtomicReference or even a synchronized block)?

typically, with this pattern: if (!instance.isAmbiguous()) { return
instance.get(); } you'll call it twice for nothing (and it can be O(n) with
n the number of beans)... or, just regarding runtime, it seems weird, no?
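
To make the idea concrete, here is a minimal sketch of the kind of lazy,
per-instance caching I mean (class and method names are only illustrative,
not the actual OWB code):

import java.util.HashSet;
import java.util.Set;

// illustrative only: a stand-in for an InstanceImpl caching its own resolution result
class LazyResolution {
    private volatile Set<String> resolvedBeans; // stands in for Set<Bean<?>>

    Set<String> resolveBeans() {
        Set<String> beans = resolvedBeans;
        if (beans == null) {
            beans = doResolve();   // the expensive lookup currently done on every call
            resolvedBeans = beans; // benign race: the result never changes, so recomputing is harmless
        }
        return beans;
    }

    private Set<String> doResolve() {
        return new HashSet<String>(); // placeholder for the real resolution through the bean manager
    }
}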

wdyt?

*Romain Manni-Bucau*
*Twitter: @rmannibucau <https://twitter.com/rmannibucau>*
*Blog: **http://rmannibucau.wordpress.com/*<http://rmannibucau.wordpress.com/>
*LinkedIn: **http://fr.linkedin.com/in/rmannibucau*
*Github: https://github.com/rmannibucau*

Re: InstanceImpl & cache?

Posted by Romain Manni-Bucau <rm...@gmail.com>.
weird result: using a ConcurrentSkipListMap instead of a ConcurrentHashMap
for the instancesCache makes it deterministic, with ~1000ms for instance
usage in the bench
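
For context, the swap is only in the cache declaration, but a
ConcurrentSkipListMap keeps its keys sorted, so the key type has to be
Comparable (or a Comparator has to be supplied); a minimal sketch with a
hypothetical String key:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ConcurrentSkipListMap;

// hypothetical key/value types, just to show the two declarations side by side
class InstancesCacheSketch {
    // hash-based cache (what is used today)
    private final ConcurrentMap<String, Object> hashCache =
            new ConcurrentHashMap<String, Object>();

    // skip-list based cache: keys are kept sorted, so the key type must be
    // Comparable (or a Comparator must be passed to the constructor)
    private final ConcurrentMap<String, Object> skipListCache =
            new ConcurrentSkipListMap<String, Object>();
}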

*Romain Manni-Bucau*
*Twitter: @rmannibucau <https://twitter.com/rmannibucau>*
*Blog: **http://rmannibucau.wordpress.com/*<http://rmannibucau.wordpress.com/>
*LinkedIn: **http://fr.linkedin.com/in/rmannibucau*
*Github: https://github.com/rmannibucau*




Re: InstanceImpl & cache?

Posted by Romain Manni-Bucau <rm...@gmail.com>.
here is the current patch: https://gist.github.com/4585407

it is a bit weird: if I run the bench directly I get ~2s for instance
usage (better), but if I debug
in org.apache.webbeans.inject.instance.InstanceImpl#select just after the
cache test (I pause for the init) I get < 800ms!

any idea or feedback is welcome

2013/1/21 Arne Limburg <ar...@openknowledge.de>

> resolveBeans




*Romain Manni-Bucau*
*Twitter: @rmannibucau <https://twitter.com/rmannibucau>*
*Blog: **http://rmannibucau.wordpress.com/*<http://rmannibucau.wordpress.com/>
*LinkedIn: **http://fr.linkedin.com/in/rmannibucau*
*Github: https://github.com/rmannibucau*

Re: InstanceImpl & cache?

Posted by Arne Limburg <ar...@openknowledge.de>.
What happens from a performance perspective if you create the
BeanCacheKey directly from the parameters of implResolveByType(Type
injectionPointType, Class<?> injectionPointClass, Annotation... qualifiers)?
We can't be faster than that (and of course remove the array copy and sort
operations from the BeanCacheKey implementation).
If that is not fast enough, I would go with caching the result of
InstanceImpl#resolveBeans.
The cache should be a transient, not synchronized or volatile class
attribute, since the result will never change and computing it twice is
not slower than the current solution...
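
As a minimal sketch of that last point (class and field names are only
illustrative, not the actual OWB code):

import java.util.HashSet;
import java.util.Set;

// a plain lazily initialized field, deliberately neither synchronized nor volatile:
// the computed value never changes, so if two threads race, each just computes
// the same result once - no worse than computing it on every call as today
class PlainLazyCache {
    private transient Set<String> resolvedBeans; // transient: not part of any serialized state

    Set<String> resolveBeans() {
        if (resolvedBeans == null) {
            resolvedBeans = doResolve();
        }
        return resolvedBeans;
    }

    private Set<String> doResolve() {
        return new HashSet<String>(); // placeholder for the real resolution
    }
}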

Cheers,
Arne




Re: InstanceImpl & cache?

Posted by Romain Manni-Bucau <rm...@gmail.com>.
I tried several enhancements in BeanCacheKey but none was enough (maybe I
didn't cut the right part)

*Romain Manni-Bucau*
*Twitter: @rmannibucau <https://twitter.com/rmannibucau>*
*Blog: **http://rmannibucau.wordpress.com/*<http://rmannibucau.wordpress.com/>
*LinkedIn: **http://fr.linkedin.com/in/rmannibucau*
*Github: https://github.com/rmannibucau*




Re: InstanceImpl & cache?

Posted by Arne Limburg <ar...@openknowledge.de>.
Neither solution seems to be perfect.


What about improving the BeanCacheKey performance?

The creation of the bean cache key involves many computations; what about
simply removing some of them? E.g. if we remove the sorting of the
qualifiers, it would potentially end up with more cache entries for the
same set of annotations, but how often does this occur in practice? Even
the copying of the array could be removed.
Maybe the path to the BDA should also be replaced by the injection point
class. This would blow up the cache, but maybe the performance improvement
is worth it?
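
To illustrate the trade-off, a hypothetical, simplified key that keeps the
qualifier array exactly as passed (no defensive copy, no sort); the same
set of qualifiers given in a different order then compares (and in general
hashes) differently, so it ends up in a separate cache entry:

import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.util.Arrays;

// hypothetical, simplified key - order-sensitive on purpose (no copy, no sort)
final class UnsortedBeanCacheKey {
    private final Type injectionPointType;
    private final Annotation[] qualifiers; // used exactly as passed by the caller

    UnsortedBeanCacheKey(Type injectionPointType, Annotation... qualifiers) {
        this.injectionPointType = injectionPointType;
        this.qualifiers = qualifiers;
    }

    @Override
    public int hashCode() {
        return 31 * injectionPointType.hashCode() + Arrays.hashCode(qualifiers);
    }

    @Override
    public boolean equals(Object other) {
        if (!(other instanceof UnsortedBeanCacheKey)) {
            return false;
        }
        UnsortedBeanCacheKey that = (UnsortedBeanCacheKey) other;
        return injectionPointType.equals(that.injectionPointType)
                && Arrays.equals(qualifiers, that.qualifiers); // order matters here
    }
}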

Cheers,
Arne



Re: InstanceImpl & cache?

Posted by Romain Manni-Bucau <rm...@gmail.com>.
well, I didn't manage to find many optimizations. The only relevant one I
found consists of 2 steps:

1) cache the BeanCacheKey used for the instance
2) cache the results of select

here is a prototype (patch for trunk): https://gist.github.com/4584921

it solves the isAmbiguous()/get() pattern pretty well

here is the perf:

direct: 21ms
instance: 3257ms
direct: 0ms
instance: 2670ms
direct: 0ms
instance: 2790ms
direct: 0ms
instance: 2540ms
direct: 0ms
instance: 2717ms
direct: 0ms
instance: 2790ms

do you think it's worth using it, or something close to it?
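
for reference, step 2 basically means memoizing what select() returns for
a given set of qualifiers; a rough sketch of the idea (hypothetical names,
not the actual patch):

import java.lang.annotation.Annotation;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// rough sketch of memoizing select() results per Instance; hypothetical names
class SelectCacheSketch {
    private final ConcurrentMap<List<Annotation>, Object> selected =
            new ConcurrentHashMap<List<Annotation>, Object>();

    Object select(Annotation... qualifiers) {
        List<Annotation> key = Arrays.asList(qualifiers); // List equals/hashCode gives a usable key
        Object cached = selected.get(key);
        if (cached == null) {
            cached = doSelect(qualifiers);                // the expensive resolution done today
            Object previous = selected.putIfAbsent(key, cached);
            if (previous != null) {
                cached = previous;                        // another thread computed it first
            }
        }
        return cached;
    }

    private Object doSelect(Annotation... qualifiers) {
        return new Object(); // placeholder for the real resolution of the selected Instance
    }
}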

*Romain Manni-Bucau*
*Twitter: @rmannibucau <https://twitter.com/rmannibucau>*
*Blog: **http://rmannibucau.wordpress.com/*<http://rmannibucau.wordpress.com/>
*LinkedIn: **http://fr.linkedin.com/in/rmannibucau*
*Github: https://github.com/rmannibucau*




Re: InstanceImpl & cache?

Posted by Romain Manni-Bucau <rm...@gmail.com>.
you are right, the cache is well done... but using it is what costs.

let me explain: if I use a direct injection I have the instance itself and
all is fine. If not, we go through the resolution and, as you said, we use
the cache.

To use the cache we need a key (since it's a map), and the key is a
BeanCacheKey: that's what takes the time, mainly its construction.

I'll have a look at whether it can be enhanced a bit.

FYI here is my test:

// imports for the test; the org.apache.webbeans ones come from OWB itself
// (test harness and impl), so the exact packages may differ between versions
import java.util.ArrayList;
import java.util.Collection;
import java.util.concurrent.TimeUnit;

import javax.enterprise.inject.Any;
import javax.enterprise.inject.Default;
import javax.enterprise.inject.Instance;
import javax.enterprise.inject.spi.Bean;
import javax.enterprise.util.AnnotationLiteral;
import javax.inject.Inject;

import org.junit.Assert;
import org.junit.Test;

import org.apache.webbeans.annotation.DefaultLiteral;
import org.apache.webbeans.test.AbstractUnitTest;

public class TmpInstanceImpl extends AbstractUnitTest {
    @Test
    public void perf()
    {

        final Collection<Class<?>> classes = new ArrayList<Class<?>>();
        classes.add(Foo.class);
        classes.add(DirectInjection.class);
        classes.add(InstanceInjection.class);

        startContainer(classes);

        final Bean<?> dir =
getBeanManager().resolve(getBeanManager().getBeans(DirectInjection.class));
        final DirectInjection di = (DirectInjection)
getBeanManager().getReference(dir, DirectInjection.class,
getBeanManager().createCreationalContext(dir));
        Assert.assertNotNull(di);

        final Bean<?> inst =
getBeanManager().resolve(getBeanManager().getBeans(InstanceInjection.class));
        final InstanceInjection ii = (InstanceInjection)
getBeanManager().getReference(inst, InstanceInjection.class,
getBeanManager().createCreationalContext(inst));
        Assert.assertNotNull(ii);

        long start, end;

        for (int i = 0; i < 10; i++) {
            start = System.nanoTime();
            for (int k = 0; k < 1000000; k++) {
                di.resolve();
            }
            end = System.nanoTime();
            System.out.println("direct: " +
TimeUnit.NANOSECONDS.toMillis(end - start) + "ms");

            start = System.nanoTime();
            for (int k = 0; k < 1000000; k++) {
                ii.resolve();
            }
            end = System.nanoTime();
            System.out.println("instance: " +
TimeUnit.NANOSECONDS.toMillis(end - start) + "ms");
        }

        shutDownContainer();
    }

    public static class Foo {
        public void touch() {
            // no-op
        }
    }

    public static class DirectInjection {
        @Inject
        private Foo foo;

        public void resolve() {
            foo.touch();
        }
    }

    public static class InstanceInjection {
        private static final AnnotationLiteral<Default> AL = new
DefaultLiteral();

        @Inject @Any
        private Instance<Foo> foo;

        public void resolve() {
            foo.select(AL).get().touch();
        }
    }
}

and here is what the result looks like:

direct: 5ms
instance: 10191ms
direct: 0ms
instance: 10487ms
direct: 0ms
instance: 9722ms
direct: 0ms
instance: 9171ms
direct: 0ms

*Romain Manni-Bucau*
*Twitter: @rmannibucau <https://twitter.com/rmannibucau>*
*Blog: **http://rmannibucau.wordpress.com/*<http://rmannibucau.wordpress.com/>
*LinkedIn: **http://fr.linkedin.com/in/rmannibucau*
*Github: https://github.com/rmannibucau*




Re: InstanceImpl & cache?

Posted by Arne Limburg <ar...@openknowledge.de>.
Yes, that's interesting and unexpected.

Can you please profile where the time is lost?
In theory, we could even cache the instance when it is a normal-scoped
proxy.

But it is interesting for the overall performance where the time is
lost...

Cheers,
Arne




Re: InstanceImpl & cache?

Posted by Romain Manni-Bucau <rm...@gmail.com>.
Well, it doesn't seem to be as efficient as expected. Replacing a plain old
injection by an Instance, I expected no more than x3 in execution time but
it was easily x10... and I had fewer than 5 beans.

Not sure what takes the time... I thought of caching the whole result in
the instance, but if you have better ideas it is still open for 1.1.8 and
1.2.0 :)

Re: InstanceImpl & cache?

Posted by Arne Limburg <ar...@openknowledge.de>.
Hi Romain,

It is cached in the InjectionResolver; that should suffice.


Cheers,
Arne
