Posted to user@geode.apache.org by Mark Bretl <as...@gmail.com> on 2016/06/17 18:04:30 UTC

Re: com.gemstone.gemfire.ForcedDisconnectException: Member isn't responding to heartbeat requests

+ user@geode.i.a.o

On Fri, Jun 17, 2016 at 8:49 AM, Anthony Baker <ab...@pivotal.io> wrote:

> I would start by turning on GC logging and see if you have any long
> collections.  A stop-the-world collection that takes longer than the member
> timeout could cause the behavior you are observing.
>
> You may want to consider using an off-heap region.  Also check out the
> other eviction algorithms (entry count, region size) to see if those would
> be appropriate.
>
> Anthony
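[Editor's note: the GC-logging suggestion above can be applied when starting each cache server. A minimal sketch follows, using JDK 8 era flags (matching this 2016 thread); the server name, port, and log path are illustrative, and `--J` is gfsh's pass-through for raw JVM arguments.]

```shell
# Start a cache server with GC logging enabled (JDK 8 flags).
# Long pauses in the resulting log that exceed the member timeout would
# line up with the forced disconnect discussed in this thread.
gfsh start server --name=S1 --server-port=40404 \
  --J=-Xms8g --J=-Xmx8g \
  --J=-XX:+PrintGCDetails \
  --J=-XX:+PrintGCTimeStamps \
  --J=-Xloggc:/var/log/geode/s1-gc.log
```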
>
> > On Jun 17, 2016, at 8:33 AM, Avinash Dongre <do...@gmail.com>
> wrote:
> >
> > Thanks Anthony
> >
> > *>>>>  At the CRITICAL threshold further writes are blocked. *
> >
> > Does the client get any kind of warning/error, or do put or putAll fail?
> > My client is just putting KVs and is not aware of the server state, and I
> > am getting
> >
> > com.gemstone.gemfire.ForcedDisconnectException: Member isn't responding
> > to heartbeat requests, after which the server attempts to reconnect.
> >
> > Thanks
> > Avinash
> >
> >
> >
> > On Fri, Jun 17, 2016 at 7:50 PM, Anthony Baker <ab...@pivotal.io>
> wrote:
> >
> >> You need enough heap to retain all your keys in memory.  Values begin to
> >> be evicted at the EVICTION threshold.  At the CRITICAL threshold further
> >> writes are blocked.  Your GC settings should be tuned to match these
> >> thresholds (search the mailing list archives and/or wiki for some advice
> >> on GC tuning).
> >>
> >> Anthony
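[Editor's note: as a sketch, the thresholds and GC settings described above can be aligned at server startup. The percentages below are illustrative, not recommendations; `--eviction-heap-percentage` and `--critical-heap-percentage` are the gfsh options for these thresholds, and the CMS flags match a JDK 8 era deployment.]

```shell
# Heap-LRU eviction starts at 75% tenured occupancy; writes are blocked at 90%.
# CMS is told to start concurrent collections below the eviction threshold so
# the old generation is collected before the eviction/critical levels trigger.
gfsh start server --name=S1 \
  --eviction-heap-percentage=75 \
  --critical-heap-percentage=90 \
  --J=-XX:+UseConcMarkSweepGC \
  --J=-XX:CMSInitiatingOccupancyFraction=70
```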
> >>
> >>> On Jun 17, 2016, at 6:42 AM, Avinash Dongre <do...@gmail.com>
> >> wrote:
> >>>
> >>> Thanks Anthony,
> >>>
> >>> I know that my heap is not sufficient for the data I want to put,
> >>> but my region is configured as PARTITION_OVERFLOW, so I was hoping
> >>> that once Geode reached the critical heap threshold it would start
> >>> overflowing the data.
> >>>
> >>> Is this assumption correct?
> >>>
> >>> thanks
> >>> Avinash
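[Editor's note: for reference, a PARTITION_OVERFLOW region like the one in question can be created via gfsh as sketched below; the region name is illustrative. Note that overflow evicts values only — keys stay in memory, which is why key volume still matters for heap sizing.]

```shell
# Creates a partitioned region whose values overflow to disk under heap-LRU
gfsh create region --name=exampleRegion --type=PARTITION_OVERFLOW
```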
> >>>
> >>>
> >>> On Thu, Jun 16, 2016 at 8:21 PM, Anthony Baker <ab...@pivotal.io>
> >> wrote:
> >>>
> >>>> Hi Avinash,
> >>>>
> >>>> The question to answer is “Why was a member removed from the cluster?”
> >>>> Some things to investigate:
> >>>>
> >>>> - Insufficient heap for the data volume
> >>>> - Excessive GC causing the member to be unresponsive
> >>>> - OutOfMemory errors in the log
> >>>> - Overloaded CPU causing delayed heartbeat responses
> >>>>
> >>>> HTH,
> >>>> Anthony
> >>>>
> >>>>> On Jun 16, 2016, at 6:48 AM, Avinash Dongre <
> dongre.avinash@gmail.com>
> >>>> wrote:
> >>>>>
> >>>>> Hello All,
> >>>>>
> >>>>> I am getting the following exception when I try to load my system
> >>>>> with a large amount of data.
> >>>>>
> >>>>> My setup details:
> >>>>> 1 locator, 3 cache servers with 8g heap each, and all regions have
> >>>>> disk persistence enabled. (All of this runs on a single AWS node.)
> >>>>>
> >>>>> Please give me some clues about what I am missing here.
> >>>>>
> >>>>>
> >>>>> [severe 2016/06/16 12:51:15.552 UTC S1 <Notification Handler> tid=0x40]
> >>>>> Uncaught exception in thread Thread[Notification Handler,10,ResourceListenerInvokerThreadGroup]
> >>>>> com.gemstone.gemfire.distributed.DistributedSystemDisconnectedException: DistributedSystem is shutting down, caused by com.gemstone.gemfire.ForcedDisconnectException: Member isn't responding to heartbeat requests
> >>>>>      at com.gemstone.gemfire.distributed.internal.membership.gms.mgr.GMSMembershipManager.directChannelSend(GMSMembershipManager.java:1719)
> >>>>>      at com.gemstone.gemfire.distributed.internal.membership.gms.mgr.GMSMembershipManager.send(GMSMembershipManager.java:1897)
> >>>>>      at com.gemstone.gemfire.distributed.internal.DistributionChannel.send(DistributionChannel.java:87)
> >>>>>      at com.gemstone.gemfire.distributed.internal.DistributionManager.sendOutgoing(DistributionManager.java:3427)
> >>>>>      at com.gemstone.gemfire.distributed.internal.DistributionManager.sendMessage(DistributionManager.java:3468)
> >>>>>      at com.gemstone.gemfire.distributed.internal.DistributionManager.putOutgoing(DistributionManager.java:1828)
> >>>>>      at com.gemstone.gemfire.internal.cache.control.ResourceAdvisor$ResourceProfileMessage.send(ResourceAdvisor.java:185)
> >>>>>      at com.gemstone.gemfire.internal.cache.control.ResourceAdvisor.updateRemoteProfile(ResourceAdvisor.java:448)
> >>>>>      at com.gemstone.gemfire.internal.cache.control.HeapMemoryMonitor.processLocalEvent(HeapMemoryMonitor.java:677)
> >>>>>      at com.gemstone.gemfire.internal.cache.control.HeapMemoryMonitor.updateStateAndSendEvent(HeapMemoryMonitor.java:485)
> >>>>>      at com.gemstone.gemfire.internal.cache.control.HeapMemoryMonitor.updateStateAndSendEvent(HeapMemoryMonitor.java:448)
> >>>>>      at com.gemstone.gemfire.internal.cache.control.HeapMemoryMonitor$2.run(HeapMemoryMonitor.java:718)
> >>>>>      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >>>>>      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >>>>>      at java.lang.Thread.run(Thread.java:745)
> >>>>> Caused by: com.gemstone.gemfire.ForcedDisconnectException: Member isn't responding to heartbeat requests
> >>>>>      at com.gemstone.gemfire.distributed.internal.membership.gms.mgr.GMSMembershipManager.forceDisconnect(GMSMembershipManager.java:2551)
> >>>>>      at com.gemstone.gemfire.distributed.internal.membership.gms.membership.GMSJoinLeave.forceDisconnect(GMSJoinLeave.java:885)
> >>>>>      at com.gemstone.gemfire.distributed.internal.membership.gms.membership.GMSJoinLeave.processRemoveRequest(GMSJoinLeave.java:578)
> >>>>>      at com.gemstone.gemfire.distributed.internal.membership.gms.membership.GMSJoinLeave.processMessage(GMSJoinLeave.java:1540)
> >>>>>      at com.gemstone.gemfire.distributed.internal.membership.gms.messenger.JGroupsMessenger$JGroupsReceiver.receive(JGroupsMessenger.java:1061)
> >>>>>      at org.jgroups.JChannel.invokeCallback(JChannel.java:816)
> >>>>>      at org.jgroups.JChannel.up(JChannel.java:741)
> >>>>>      at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1030)
> >>>>>      at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
> >>>>>      at org.jgroups.protocols.FlowControl.up(FlowControl.java:392)
> >>>>>      at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1064)
> >>>>>      at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:779)
> >>>>>      at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:426)
> >>>>>      at com.gemstone.gemfire.distributed.internal.membership.gms.messenger.StatRecorder.up(StatRecorder.java:69)
> >>>>>      at com.gemstone.gemfire.distributed.internal.membership.gms.messenger.AddressManager.up(AddressManager.java:74)
> >>>>>      at org.jgroups.protocols.TP.passMessageUp(TP.java:1567)
> >>>>>      at org.jgroups.protocols.TP$SingleMessageHandler.run(TP.java:1783)
> >>>>>      at org.jgroups.util.DirectExecutor.execute(DirectExecutor.java:10)
> >>>>>      at org.jgroups.protocols.TP.handleSingleMessage(TP.java:1695)
> >>>>>      at org.jgroups.protocols.TP.receive(TP.java:1620)
> >>>>>      at com.gemstone.gemfire.distributed.internal.membership.gms.messenger.Transport.receive(Transport.java:158)
> >>>>>      at org.jgroups.protocols.UDP$PacketReceiver.run(UDP.java:701)
> >>>>>      ... 1 more
> >>>>
> >>>>
> >>
> >>
>
>

Re: com.gemstone.gemfire.ForcedDisconnectException: Member isn't responding to heartbeat requests

Posted by Michael Stolz <ms...@pivotal.io>.
Overflow in Geode is really intended as a way of dealing with a TEMPORARY
situation where there is too much data to fit in memory... NOT as a
steady-state design pattern.

Making Geode disk-based is an anti-pattern, in my opinion.

--
Mike Stolz
Principal Engineer, GemFire Product Manager
Mobile: 631-835-4771

On Fri, Jun 17, 2016 at 1:04 PM, Mark Bretl <as...@gmail.com> wrote:

> [...]
