Posted to user@geode.apache.org by Eric Pederson <er...@gmail.com> on 2015/09/29 00:33:39 UTC

Converting a client to peer

Thanks for the answers to my previous question about getting a callback if
the cluster goes down.  We decided to go with EndpointListener in the short
term as we’re still on GemFire 7.0.2 (I forgot to mention that).  We’re
going to upgrade soon though and then we’ll move to ClientMembershipListener as
it’s a public API.
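
For reference, my understanding is that the registration will look roughly
like the minimal sketch below (package names as in current Geode builds;
ClusterWatcher is a placeholder, and only the callbacks we care about are
overridden):

import com.gemstone.gemfire.management.membership.ClientMembership;
import com.gemstone.gemfire.management.membership.ClientMembershipEvent;
import com.gemstone.gemfire.management.membership.ClientMembershipListenerAdapter;

public class ClusterWatcher {

  // On a client, memberLeft/memberCrashed fire when a server connection
  // goes away, which is the "cluster down" signal we are after.
  public static void register() {
    ClientMembership.registerClientMembershipListener(
        new ClientMembershipListenerAdapter() {
          @Override
          public void memberLeft(ClientMembershipEvent event) {
            System.out.println("Server left: " + event.getMemberId());
          }

          @Override
          public void memberCrashed(ClientMembershipEvent event) {
            System.out.println("Server crashed: " + event.getMemberId());
          }
        });
  }
}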



I have some related questions – here’s some background:  We have a cluster
of GemFire servers and a number of Replicated regions.  We have a
microservice architecture where all of our applications are publishers for
some regions and clients for other regions.  We use CQs for most if not all
of the client scenarios.  Because of the CQ requirement all of our
applications are clients.



In one of these applications (called Trade Server) we would like to avoid
needing to have it reload its region in the cluster if the cluster goes
down completely and comes back up.  I discussed with my colleagues the
possibility of making the Trade Server a peer instead of a client.  It
could be a replica for its region and then it would not be impacted if the
main cluster went down.  And then when the cluster came back up Trade
Server would replicate its data back to it.  The only glitch is that it is
a client for other regions.  I told them that instead of using CQs in Trade
Server we could use CacheListeners (still determining whether any query is
more complicated than select * from /otherRegion).  They are hesitant
because they are attached to CQs.
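
Roughly what I have in mind is the sketch below (names and queue size are
placeholders): the listener stays off the distribution thread by handing
each new value to a bounded queue that the application drains on its own
thread.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import com.gemstone.gemfire.cache.EntryEvent;
import com.gemstone.gemfire.cache.util.CacheListenerAdapter;

public class OtherRegionListener extends CacheListenerAdapter<String, Object> {

  // Bounded, so a slow consumer shows up as a full queue rather than as
  // hidden heap growth.
  private final BlockingQueue<Object> events =
      new LinkedBlockingQueue<Object>(100000);

  @Override
  public void afterCreate(EntryEvent<String, Object> event) {
    enqueue(event);
  }

  @Override
  public void afterUpdate(EntryEvent<String, Object> event) {
    enqueue(event);
  }

  private void enqueue(EntryEvent<String, Object> event) {
    // Copy out what we need rather than retaining the EntryEvent itself.
    if (!events.offer(event.getNewValue())) {
      System.err.println("Event queue full; dropping key " + event.getKey());
    }
  }

  // Application threads drain this queue.
  public BlockingQueue<Object> getEvents() {
    return events;
  }
}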



Does this sound reasonable to you?



Something that has caused us a bit of pain in the past is the fact that one
JVM can either be a Client or a Peer, but not both.  And you can’t have
multiple instances of ClientCache since it uses statics.  The latter was a
problem in our microservices architecture as each service has its own
client API, but each client API can’t have its own ClientCache.  We worked
around it by wrapping ClientCache and making the wrapper API a singleton.
But there are still some gotchas, like if two services use different PDX
serialization configs, etc.
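
The shape of the workaround is roughly this (a stripped-down sketch; the
class name and cache-xml file are placeholders, and the real wrapper
carries more configuration):

import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;

// Process-wide holder, since the JVM only gets one ClientCache.
public final class SharedClientCache {

  private static ClientCache cache;

  private SharedClientCache() {}

  public static synchronized ClientCache get() {
    if (cache == null) {
      // First caller wins: its settings (PDX included) apply to every
      // service in the process.
      cache = new ClientCacheFactory()
          .set("cache-xml-file", "client-cache.xml")
          .create();
    }
    return cache;
  }
}

The PDX gotcha falls straight out of this shape: whichever service
initializes the wrapper first fixes the serialization config for everyone
else.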



Is that something you have been thinking about fixing for the future?  That
is, making it so, in one JVM, you can have multiple clients/peers?   With
microservices becoming a bigger trend I think more people will want that.



Thanks,

-- Eric

Re: Converting a client to peer

Posted by Eric Pederson <er...@gmail.com>.
Cool - thanks Barry, I will put together a prototype and report back.

On Tuesday, October 13, 2015, Barry Oglesby <bo...@pivotal.io> wrote:

> Eric,
>
> This idea definitely works. Here are some parts of an example. If you want
> the whole thing, let me know.
>
> Create your client xml with 2 pools like:
>
> <client-cache>
>
>   <pool name="uat" subscription-enabled="true">
>     <locator host="localhost" port="12341"/>
>   </pool>
>
>   <pool name="prod" subscription-enabled="true">
>     <locator host="localhost" port="12342"/>
>   </pool>
>
> </client-cache>
>
> Then register your CQs against each pool like:
>
> private void registerCqs() throws Exception {
>   registerCq("uat");
>   registerCq("prod");
> }
>
> private void registerCq(String poolName) throws Exception {
>   // Get the query service
>   QueryService queryService = ((ClientCache)
> this.cache).getQueryService(poolName);
>
>   // Create CQ Attributes
>   CqAttributesFactory cqAf = new CqAttributesFactory();
>
>   // Initialize and set CqListener
>   CqListener[] cqListeners = {new TestCqListener(poolName)};
>   cqAf.initCqListeners(cqListeners);
>   CqAttributes cqa = cqAf.create();
>
>   // Construct a new CQ
>   String cqName = poolName + "_cq";
>   String cqQuery = "SELECT * FROM /data";
>   CqQuery cq = queryService.newCq(cqName, cqQuery, cqa);
>   cq.execute();
>   System.out.println("Registered pool=" + poolName + "; cq=" + cqName + ";
> query=" + cqQuery);
> }
>
>
> Barry Oglesby
> GemFire Advanced Customer Engineering (ACE)
> For immediate support please contact Pivotal Support at
> http://support.pivotal.io/
>
>
> On Tue, Oct 13, 2015 at 6:07 AM, Eric Pederson <ericacm@gmail.com> wrote:
>
>> Hi Anil - thanks, I will try that and get back to you.
>>
>>
>> -- Eric
>>
>> On Mon, Oct 12, 2015 at 6:21 PM, Anilkumar Gingade <agingade@pivotal.io> wrote:
>>
>>> Are you looking at connecting a client to multiple environments (servers
>>> in dev, UAT, prod...) and getting the events? If this is the case, one
>>> option to try is to create client connection pools to the different
>>> environments and register CQs using those pools... (I haven't tried this,
>>> but I think it's doable.)
>>>
>>> -Anil..
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Oct 12, 2015 at 1:44 PM, Eric Pederson <ericacm@gmail.com> wrote:
>>>
>>>> Hi all -
>>>>
>>>> I logged https://issues.apache.org/jira/browse/GEODE-395 as a feature
>>>> request to support multiple Caches per JVM.  One thing I forgot in my
>>>> earlier email, and probably the biggest pain point with the current
>>>> limitation, is the ability to connect to multiple environments at the
>>>> same time.  For example, we will connect to UAT for most services, but
>>>> we'll want to point one service in particular to Dev for debugging, or
>>>> maybe point it to Prod to get some live data.
>>>>
>>>> Thanks,
>>>>
>>>>
>>>> -- Eric
>>>>
>>>> On Wed, Sep 30, 2015 at 11:37 AM, Eric Pederson <ericacm@gmail.com> wrote:
>>>>
>>>>> Hi Barry -
>>>>>
>>>>> The CQs are on other regions and they are doing puts on the main Trade
>>>>> region.  The Trade region is Replicated in the cluster and the Trade Server
>>>>> has a CACHING_PROXY client region.
>>>>>
>>>>> Thanks for the tip on the CacheListener queue monitoring.
>>>>>
>>>>>
>>>>> -- Eric
>>>>>
>>>>> On Tue, Sep 29, 2015 at 7:32 PM, Barry Oglesby <boglesby@pivotal.io> wrote:
>>>>>
>>>>>> One thing I wanted to clarify is how you're loading the data in the
>>>>>> Trade Server client now. Are you doing puts from the CqListener into a
>>>>>> local region?
>>>>>>
>>>>>> Also, one thing to be careful about with asynchronous CacheListeners
>>>>>> is that they tend to hide memory usage if the thread pool can't keep
>>>>>> up with the tasks being executed. At the very least, make sure to
>>>>>> monitor the size of the thread pool's backing queue.
>>>>>>
>>>>>> Barry Oglesby
>>>>>> GemFire Advanced Customer Engineering (ACE)
>>>>>> For immediate support please contact Pivotal Support at
>>>>>> http://support.pivotal.io/
>>>>>>
>>>>>>
>>>>>> On Tue, Sep 29, 2015 at 6:06 AM, Eric Pederson <ericacm@gmail.com> wrote:
>>>>>>
>>>>>>> Thanks Barry.  That makes a lot of sense.  With power comes great
>>>>>>> responsibility... It sounds like we would want to have the CacheListener be
>>>>>>> asynchronous, adding events to a queue that the application code pulls
>>>>>>> from.
>>>>>>>
>>>>>>>
>>>>>>> -- Eric
>>>>>>>
>>>>>>> On Mon, Sep 28, 2015 at 10:06 PM, Barry Oglesby <boglesby@pivotal.io> wrote:
>>>>>>>
>>>>>>>> The big difference between a peer and a client is that the peer is
>>>>>>>> a member of the distributed system whereas the client is not. This means,
>>>>>>>> among other things, that CacheListener callbacks are synchronous with the
>>>>>>>> original operation whereas CqListener callbacks are not. When the Trade
>>>>>>>> Server peer is started, your application put performance may degrade
>>>>>>>> depending on what is done in the CacheListener callback.
>>>>>>>>
>>>>>>>> You'll have synchronous replication of data between the server and
>>>>>>>> peer as well, but if the client's queue is on a node remote to where the
>>>>>>>> operation occurs, then that is also a synchronous replication of data. So,
>>>>>>>> that more-or-less balances out.
>>>>>>>>
>>>>>>>> Also, the health of a Trade Server peer can affect the other
>>>>>>>> distributed system members to a greater degree than a client. For example,
>>>>>>>> operations being replicated to the Trade Server peer will be impacted if a
>>>>>>>> long GC is occurring in it.
>>>>>>>>
>>>>>>>>
>>>>>>>> Barry Oglesby
>>>>>>>> GemFire Advanced Customer Engineering (ACE)
>>>>>>>> For immediate support please contact Pivotal Support at
>>>>>>>> http://support.pivotal.io/

-- 
Sent from Gmail Mobile

Re: Converting a client to peer

Posted by Anilkumar Gingade <ag...@pivotal.io>.
Do we have use cases requiring multiple caches?

We also need to look at it from a simplicity aspect; supporting multiple
caches may add complications with respect to:
- Accessibility: do we need to support the same namespace (region names)
across the caches? Does the application need to refer to regions via an
individual cache reference?
- Data replication: is data replication supported between two caches within
the same JVM? What about WAN replication for multiple caches in the same
JVM?
- Client cache (near cache): does the client need to support multiple
caches to access data, or to register interest/CQs?

-Anil.





Re: Converting a client to peer

Posted by John Blum <jb...@pivotal.io>.
In addition to statics, there is never any excuse for poor encapsulation of
state (especially given modern JVM optimizations), which not only makes
testing more difficult, but also makes a class less extensible.



-- 
-John
503-504-8657
john.blum10101 (skype)

Re: Converting a client to peer

Posted by Dan Smith <ds...@pivotal.io>.
+1 for getting rid of statics!

The static cache also makes it harder to write tests that mock a cache, or
have multiple caches in a VM.
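
For example, if the cache were injected rather than static, a unit test
could be a few lines of Mockito (a sketch; TradeLookup is a hypothetical
class under test):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;

public class TradeLookupTest {

  // Hypothetical service that takes the cache as a constructor argument
  // instead of reaching for a static accessor.
  static class TradeLookup {
    private final ClientCache cache;
    TradeLookup(ClientCache cache) { this.cache = cache; }
    Object find(String key) {
      return cache.getRegion("trades").get(key);
    }
  }

  @Test
  public void returnsTradeFromRegion() {
    ClientCache cache = mock(ClientCache.class);
    @SuppressWarnings("unchecked")
    Region<Object, Object> trades = mock(Region.class);
    when(cache.getRegion("trades")).thenReturn(trades);
    when(trades.get("T1")).thenReturn("IBM 100 @ 42.00");

    assertEquals("IBM 100 @ 42.00", new TradeLookup(cache).find("T1"));
  }
}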

-Dan


Re: Converting a client to peer

Posted by Sergey Shcherbakov <ss...@pivotal.io>.
The use of static state for the Geode cache in a JVM process is a terribly
limiting factor, and there is not much excuse to have that in Geode
nowadays.  We had to fight hard against this limitation in many projects.

So, voting up for GEODE-395
<https://issues.apache.org/jira/browse/GEODE-395>!



Best Regards,
Sergey

Re: Converting a client to peer

Posted by Barry Oglesby <bo...@pivotal.io>.
Eric,

This idea definitely works. Here are some parts of an example. If you want
the whole thing, let me know.

Create your client xml with 2 pools like:

<client-cache>

  <pool name="uat" subscription-enabled="true">
    <locator host="localhost" port="12341"/>
  </pool>

  <pool name="prod" subscription-enabled="true">
    <locator host="localhost" port="12342"/>
  </pool>

</client-cache>

Then register your CQs against each pool like:

private void registerCqs() throws Exception {
  registerCq("uat");
  registerCq("prod");
}

private void registerCq(String poolName) throws Exception {
  // Get the query service
  QueryService queryService = ((ClientCache)
this.cache).getQueryService(poolName);

  // Create CQ Attributes
  CqAttributesFactory cqAf = new CqAttributesFactory();

  // Initialize and set CqListener
  CqListener[] cqListeners = {new TestCqListener(poolName)};
  cqAf.initCqListeners(cqListeners);
  CqAttributes cqa = cqAf.create();

  // Construct a new CQ
  String cqName = poolName + "_cq";
  String cqQuery = "SELECT * FROM /data";
  CqQuery cq = queryService.newCq(cqName, cqQuery, cqa);
  cq.execute();
  System.out.println("Registered pool=" + poolName + "; cq=" + cqName + ";
query=" + cqQuery);
}
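
TestCqListener isn't shown above; a minimal sketch of it (with the
com.gemstone.gemfire.cache.query imports the snippet assumes) just tags
each event with the pool it came from:

import com.gemstone.gemfire.cache.query.CqEvent;
import com.gemstone.gemfire.cache.query.CqListener;

public class TestCqListener implements CqListener {

  private final String poolName;

  public TestCqListener(String poolName) {
    this.poolName = poolName;
  }

  @Override
  public void onEvent(CqEvent event) {
    System.out.println("pool=" + poolName + "; key=" + event.getKey()
        + "; value=" + event.getNewValue());
  }

  @Override
  public void onError(CqEvent event) {
    System.out.println("pool=" + poolName + "; error=" + event.getThrowable());
  }

  @Override
  public void close() {}
}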


Barry Oglesby
GemFire Advanced Customer Engineering (ACE)
For immediate support please contact Pivotal Support at
http://support.pivotal.io/



Re: Converting a client to peer

Posted by Eric Pederson <er...@gmail.com>.
Hi Anil - thanks, I will try that and get back to you.


-- Eric


Re: Converting a client to peer

Posted by Anilkumar Gingade <ag...@pivotal.io>.
Are you looking at connecting a client to multiple environments (servers in
dev, UAT, prod...) and getting the events? If that is the case, one option
to try is to create client connection pools to the different environments
and register CQs using those pools... (I haven't tried this, but I think
it's doable.)

-Anil..

Re: Converting a client to peer

Posted by Eric Pederson <er...@gmail.com>.
Hi all -

I logged https://issues.apache.org/jira/browse/GEODE-395 as a feature
request to support multiple Caches per JVM.  One thing I forgot in my
earlier email, and probably the biggest pain point with the current
limitation, is the inability to connect to multiple environments at the
same time.  For example, we want to connect to UAT for most services, but
we'll want to point one service in particular to Dev for debugging, or
maybe point it to Prod to get some live data.

Thanks,


-- Eric


Re: Converting a client to peer

Posted by Eric Pederson <er...@gmail.com>.
Hi Barry -

The CQs are on other regions and they are doing puts on the main Trade
region.  The Trade region is Replicated in the cluster and the Trade Server
has a CACHING_PROXY client region.
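
A CACHING_PROXY region of that kind is created along these lines (a sketch
using the GemFire 7.x client API; the region name is illustrative):

import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientRegionShortcut;

public class TradeRegionSetup {

  // CACHING_PROXY caches entries locally while still delegating all
  // operations to the servers.
  static Region<Object, Object> createTradeRegion(ClientCache cache) {
    return cache.<Object, Object>createClientRegionFactory(
        ClientRegionShortcut.CACHING_PROXY)
        .create("Trade"); // illustrative region name
  }
}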

Thanks for the tip on the CacheListener queue monitoring.


-- Eric


Re: Converting a client to peer

Posted by Barry Oglesby <bo...@pivotal.io>.
One thing I wanted to clarify is how you're loading the data in the Trade
Server client now. Are you doing puts from the CqListener into a local
region?

Also, one thing to be careful about with asynchronous CacheListeners is
that they tend to hide memory usage if the thread pool can't keep up with
the tasks being executed. At the very least, make sure to monitor the size
of the thread pool's backing queue.
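
As a concrete illustration of that hand-off pattern, something like the
following could work (a sketch only; the class name, the queue bound, and
the choice to snapshot key/value pairs rather than retain the EntryEvent
are all assumptions, using GemFire 7.x package names):

import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import com.gemstone.gemfire.cache.EntryEvent;
import com.gemstone.gemfire.cache.util.CacheListenerAdapter;

// Hands events off to a bounded queue so the cache's callback thread
// returns quickly; an application thread drains the queue. The bound keeps
// a backlog visible instead of letting it silently consume heap.
public class QueueingCacheListener extends CacheListenerAdapter<Object, Object> {

  private final BlockingQueue<Map.Entry<Object, Object>> queue =
      new ArrayBlockingQueue<Map.Entry<Object, Object>>(10000);

  @Override
  public void afterCreate(EntryEvent<Object, Object> event) {
    enqueue(event);
  }

  @Override
  public void afterUpdate(EntryEvent<Object, Object> event) {
    enqueue(event);
  }

  private void enqueue(EntryEvent<Object, Object> event) {
    // Copy the key/value out rather than holding the EntryEvent itself.
    if (!queue.offer(new SimpleImmutableEntry<Object, Object>(
        event.getKey(), event.getNewValue()))) {
      // Queue full: log (or block/drop per policy) rather than grow unbounded.
      System.err.println("Listener queue full; size=" + queue.size());
    }
  }

  // Exposed so the application can drain the queue and monitor its depth.
  public BlockingQueue<Map.Entry<Object, Object>> getQueue() {
    return queue;
  }
}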

Barry Oglesby
GemFire Advanced Customer Engineering (ACE)
For immediate support please contact Pivotal Support at
http://support.pivotal.io/



Re: Converting a client to peer

Posted by Eric Pederson <er...@gmail.com>.
Thanks Barry.  That makes a lot of sense.  With great power comes great
responsibility... It sounds like we would want to make the CacheListener
asynchronous, adding events to a queue that the application code pulls
from.
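
The consumer side of that queue might be roughly as follows (a sketch;
'listener' is a queueing listener like the one sketched earlier in the
thread, and processTrade stands in for hypothetical application logic):

// Hypothetical consumer draining the listener's queue on its own thread.
Thread consumer = new Thread(new Runnable() {
  public void run() {
    try {
      while (true) {
        // take() blocks until an event snapshot is available.
        Map.Entry<Object, Object> entry = listener.getQueue().take();
        processTrade(entry.getKey(), entry.getValue());
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // exit cleanly on shutdown
    }
  }
});
consumer.start();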


-- Eric


Re: Converting a client to peer

Posted by Barry Oglesby <bo...@pivotal.io>.
The big difference between a peer and a client is that the peer is a member
of the distributed system whereas the client is not. This means, among
other things, that CacheListener callbacks are synchronous with the
original operation whereas CqListener callbacks are not. When the Trade
Server peer is started, your application put performance may degrade
depending on what is done in the CacheListener callback.

You'll have synchronous replication of data between the server and peer as
well, but if the client's queue is on a node remote to where the operation
occurs, then that is also a synchronous replication of data. So, that
more-or-less balances out.

Also, the health of a Trade Server peer can affect the other distributed
system members to a greater degree than a client. For example, operations
being replicated to the Trade Server peer will be impacted if a long GC is
occurring in it.
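
For reference, the two bootstrap paths look roughly like this under the
GemFire 7.x API (the locator host/port and property values are
illustrative), and only one of the two can be live in a JVM at a time:

import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.CacheFactory;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;

public class BootstrapPaths {

  // Peer: joins the distributed system as a member via the locator.
  static Cache createPeerCache() {
    return new CacheFactory()
        .set("locators", "localhost[12341]") // illustrative locator
        .set("mcast-port", "0")
        .create();
  }

  // Client: connects to the servers through a pool; not a member.
  static ClientCache createClientCache() {
    return new ClientCacheFactory()
        .addPoolLocator("localhost", 12341)  // illustrative locator
        .setPoolSubscriptionEnabled(true)    // required for CQ delivery
        .create();
  }
}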


Barry Oglesby
GemFire Advanced Customer Engineering (ACE)
For immediate support please contact Pivotal Support at
http://support.pivotal.io/

