Posted to dev@geode.apache.org by Avinash Dongre <ad...@apache.org> on 2016/11/16 16:15:16 UTC

Strange Performance Issue with Large number Region Creation

Hi,

I am seeing a strange performance issue when creating a large number of
regions. My test creates 1500 regions:

~2 minutes to create the first 1130 regions
~7 minutes to create the remaining 370 regions

Here is the example code:

Cache geodeCache = this.createCache();
final String REGION_NAME = "Region_Name";
for (int i = 0; i < 1500; i++) {
  RegionFactory<Object, Object> rf =
      geodeCache.createRegionFactory(RegionShortcut.PARTITION_PERSISTENT_OVERFLOW);
  rf.setDiskSynchronous(true);
  rf.setEvictionAttributes(
      EvictionAttributes.createLIFOEntryAttributes(1, EvictionAction.OVERFLOW_TO_DISK));
  Region<Object, Object> region = rf.create(REGION_NAME + "_" + i);
}
geodeCache.close();


If I remove the following code from createVMRegion (GemFireCacheImpl.java),
then all the regions are created in about 2 minutes:

if (!rgn.isInternalRegion()) {
  system.handleResourceEvent(ResourceEvent.REGION_CREATE, rgn);
}

Any help or pointers on why this is happening would be appreciated.


Thanks

Avinash

Re: Strange Performance Issue with Large number Region Creation

Posted by Udo Kohlmeyer <uk...@pivotal.io>.
+1

Maybe we can create a JIRA for this.



Re: Strange Performance Issue with Large number Region Creation

Posted by Dan Smith <ds...@pivotal.io>.
Fixing that StatisticsMonitor to use a ConcurrentHashSet seems like a
good fix; I can't see any reason why it should be a list.

As other folks mentioned, there is significant overhead involved in
creating that many regions in terms of memory, messaging, and disk
metadata, especially since you are creating partitioned regions: each
bucket carries some overhead, and with the default number of buckets you
are creating some 170000 of them.

-Dan
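(For reference, the ~170000 figure above follows from Geode's default of
113 buckets per partitioned region; a quick sanity check, assuming the
default total-num-buckets was not overridden:)

```java
// Back-of-the-envelope bucket count for the test in this thread:
// 1500 partitioned regions, each with the default 113 buckets.
public class BucketMath {
    public static void main(String[] args) {
        int regions = 1500;
        int bucketsPerRegion = 113; // Geode default total-num-buckets
        int totalBuckets = regions * bucketsPerRegion;
        System.out.println(totalBuckets); // 169500, i.e. "some 170000"
    }
}
```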




Re: Strange Performance Issue with Large number Region Creation

Posted by Udo Kohlmeyer <uk...@pivotal.io>.
@Avinash,

I'd be interested in understanding why you need to create 1500 regions.

If you could explain the use case that requires 1500 regions, we could
potentially help with another solution.

This reminds me of the Oracle table limitation of 999 columns, where the
Oracle support engineer's first question was: what are you storing that
needs more than 999 columns in one table?

--Udo




Re: Strange Performance Issue with Large number Region Creation

Posted by Avinash Dongre <ad...@apache.org>.
Hi All,

Thanks @Mike. Unfortunately I am in a situation where I need to create
that many regions.

I would like to understand why StatisticsMonitor and StatMonitorHandler
use a List to store monitors, listeners, and statisticIds. Is there any
particular reason for this?

When I replace the List with com.gemstone.gemfire.internal.concurrent.
ConcurrentHashSet, I see a significant improvement when creating a large
number of regions (from ~7 hrs to ~26 minutes).


Thanks
Avinash
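(For context, the List-versus-set difference described above can be
sketched as follows. The class and method names here are hypothetical,
not the actual Geode internals: the point is that add-if-absent against
a List is an O(n) scan repeated for every region created, while a
hash-based concurrent set is roughly O(1) per add.)

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative sketch only, not Geode code.
public class MonitorRegistry {
    final List<String> listenerList = new CopyOnWriteArrayList<>();
    final Set<String> listenerSet = ConcurrentHashMap.newKeySet();

    void addViaList(String id) {
        if (!listenerList.contains(id)) { // linear scan on every add
            listenerList.add(id);         // plus a full array copy
        }
    }

    void addViaSet(String id) {
        listenerSet.add(id); // hash lookup, no scan, already idempotent
    }

    public static void main(String[] args) {
        MonitorRegistry r = new MonitorRegistry();
        for (int i = 0; i < 10_000; i++) {
            r.addViaList("stat-" + i);
            r.addViaSet("stat-" + i);
        }
        System.out.println(r.listenerList.size() + " " + r.listenerSet.size());
        // prints "10000 10000"; the List path does ~10_000 scans to get there
    }
}
```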



Re: Strange Performance Issue with Large number Region Creation

Posted by Michael Stolz <ms...@pivotal.io>.
I can't think of any reason why any use case could need 1500 Regions.

Regions are heavyweight constructs more similar in nature to Unix mounts
than Unix directories.

We usually use simple naming conventions for keys to simulate directory
structures.

So I would recommend that you create 1 Region, and store 1500 named hash
maps into it.

Make sense?

--
Mike Stolz
Principal Engineer - Gemfire Product Manager
Mobile: 631-835-4771
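(A minimal sketch of that approach: collapse the 1500 regions into one
by encoding the logical map name into the key. The CompositeKeys helper
below is hypothetical, not a Geode API; it would pair with the same
region-creation calls shown earlier in the thread.)

```java
// Hypothetical helper (not part of Geode): builds keys of the form
// "<logicalMap>/<key>" so one region can stand in for many.
public final class CompositeKeys {
    private static final char SEP = '/';

    private CompositeKeys() {}

    // The map name may not contain the separator, so composite keys
    // stay unambiguous.
    public static String of(String logicalMap, String key) {
        if (logicalMap.indexOf(SEP) >= 0) {
            throw new IllegalArgumentException(
                "map name must not contain '" + SEP + "'");
        }
        return logicalMap + SEP + key;
    }

    // Recover the logical map name from a composite key.
    public static String mapOf(String compositeKey) {
        return compositeKey.substring(0, compositeKey.indexOf(SEP));
    }
}

// Usage against a single region (same Geode calls as earlier in the thread):
//   Region<String, Object> data = geodeCache
//       .<String, Object>createRegionFactory(RegionShortcut.PARTITION_PERSISTENT_OVERFLOW)
//       .create("Data");
//   data.put(CompositeKeys.of("Region_Name_42", "someKey"), someValue);
```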