Posted to user@geode.apache.org by Pieter van Zyl <pi...@lautus.net> on 2018/04/30 10:47:16 UTC

NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Good day.

I am constantly seeing the error below when we stop and start the Geode server
after a data import.

When the client connects the second time after the restart we get
NotSerializableException:
org.apache.geode.internal.cache.Token$NotAvailable

Any ideas why we are getting this error or why it would state
"NotAvailable"?

*Versions:*

compile 'org.springframework.data:spring-data-geode:2.1.0.M2'
compile group: 'org.apache.geode', name: 'geode-core', version: '1.5.0'

Trying to access this region on startup:

<gfe:replicated-region id="ClassID-ClassName-LookUp"
                       disk-store-ref="tauDiskStore"
                       persistent="true">
    <gfe:eviction type="HEAP_PERCENTAGE" action="OVERFLOW_TO_DISK"/>
</gfe:replicated-region>

*Server config:*

<util:properties id="gemfire-props">
    <prop key="log-level">info</prop>
    <prop key="locators">pvz-dell[10334]</prop>
    <prop key="start-locator">pvz-dell[10334]</prop>
    <prop key="mcast-port">0</prop>
    <prop key="http-service-port">0</prop>
    <prop key="jmx-manager">true</prop>
    <prop key="jmx-manager-port">1099</prop>
    <prop key="jmx-manager-start">true</prop>
</util:properties>

<gfe:cache properties-ref="gemfire-props"
           pdx-serializer-ref="pdxSerializer"
           pdx-persistent="true"
           pdx-disk-store="pdx-disk-store"/>

<gfe:cache-server port="40404" max-connections="300"
                  socket-buffer-size="65536" max-threads="200"/>

<gfe:transaction-manager id="txManager"/>

<bean id="pdxSerializer" class="org.rdb.geode.mapping.RDBGeodeSerializer">
    <constructor-arg value="org.rdb.*,net.lautus.*"/>
</bean>

The server seems to be up and running:
*Cache server connection listener bound to address
pvz-dell-/0:0:0:0:0:0:0:0:40404 with backlog 1,000.*

*[info 2018/04/30 12:32:30.483 SAST <main> tid=0x1]
ClientHealthMonitorThread maximum allowed time between pings: 60,000*

*[warn 2018/04/30 12:32:30.485 SAST <main> tid=0x1] Handshaker max Pool
size: 4*

*[info 2018/04/30 12:32:30.486 SAST <Cache Server Selector
/0:0:0:0:0:0:0:0:40404 local port: 40404> tid=0x4f] SELECTOR enabled*

*[info 2018/04/30 12:32:30.491 SAST <main> tid=0x1] CacheServer
Configuration:   port=40404 max-connections=300 max-threads=200
notify-by-subscription=true socket-buffer-size=65536
maximum-time-between-pings=60000 maximum-message-count=230000
message-time-to-live=180 eviction-policy=none capacity=1 overflow
directory=. groups=[] loadProbe=ConnectionCountProbe loadPollInterval=5000
tcpNoDelay=true*

*server running on port 40404*
*Press <Enter> to terminate the server*


Exception in thread "main"
org.apache.geode.cache.client.ServerOperationException: remote server on
pvz-dell(23128:loner):38042:2edf1c16:
org.apache.geode.SerializationException: failed serializing object
at
org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:669)
at
org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:742)
at
org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:611)
at
org.apache.geode.cache.client.internal.OpExecutorImpl.executeOnServer(OpExecutorImpl.java:373)
at
org.apache.geode.cache.client.internal.OpExecutorImpl.executeWithServerAffinity(OpExecutorImpl.java:220)
at
org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:129)
at
org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:116)
at
org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:774)
at org.apache.geode.cache.client.internal.GetOp.execute(GetOp.java:91)
at
org.apache.geode.cache.client.internal.ServerRegionProxy.get(ServerRegionProxy.java:113)
at
org.apache.geode.internal.cache.tx.ClientTXRegionStub.findObject(ClientTXRegionStub.java:72)
at
org.apache.geode.internal.cache.TXStateStub.findObject(TXStateStub.java:453)
at
org.apache.geode.internal.cache.TXStateProxyImpl.findObject(TXStateProxyImpl.java:496)
at org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1366)
at org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1300)
at org.apache.geode.internal.cache.LocalRegion.get(LocalRegion.java:1285)
at
org.apache.geode.internal.cache.AbstractRegion.get(AbstractRegion.java:320)
......
Caused by: org.apache.geode.SerializationException: failed serializing
object
at
org.apache.geode.internal.cache.tier.sockets.Message.serializeAndAddPart(Message.java:399)
at
org.apache.geode.internal.cache.tier.sockets.Message.addPartInAnyForm(Message.java:360)
at
org.apache.geode.internal.cache.tier.sockets.command.Get70.writeResponse(Get70.java:424)
at
org.apache.geode.internal.cache.tier.sockets.command.Get70.cmdExecute(Get70.java:211)
at
org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:157)
at
org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:797)
at
org.apache.geode.internal.cache.tier.sockets.LegacyServerConnection.doOneMessage(LegacyServerConnection.java:85)
at
org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1148)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at
org.apache.geode.internal.cache.tier.sockets.AcceptorImpl$4$1.run(AcceptorImpl.java:641)
at java.lang.Thread.run(Thread.java:748)
*Caused by: java.io.NotSerializableException:
org.apache.geode.internal.cache.Token$NotAvailable*
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at
org.apache.geode.internal.InternalDataSerializer.writeSerializableObject(InternalDataSerializer.java:2341)
at
org.apache.geode.internal.InternalDataSerializer.basicWriteObject(InternalDataSerializer.java:2216)
at org.apache.geode.DataSerializer.writeObject(DataSerializer.java:2936)
at org.apache.geode.internal.util.BlobHelper.serializeTo(BlobHelper.java:66)
at
org.apache.geode.internal.cache.tier.sockets.Message.serializeAndAddPart(Message.java:397)


Kindly
Pieter

Re: NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Posted by Eric Shu <es...@pivotal.io>.
Hi Pieter,

This happens after a server restart. I verified that the transaction
"works" with a persistent overflow region if the server is not shut down
and restarted -- even if the entry is evicted and overflowed to disk.

We will try to fix this issue as the persistent recovery needs to work.

Regards,
Eric



On Wed, May 2, 2018 at 2:07 PM, Pieter van Zyl <pi...@lautus.net>
wrote:

> Hi Dan,
>
> Thanks for tracking this down!
>
> Much appreciated.
>
> This might also be why I didn't see it at first as we didn't activate the
> transactions on the persistent regions when we started with this evaluation.
>
> Based on this discussion
>
> https://markmail.org/message/jsabcdvyzsdrkvba?q=list:org%2Eapache%2Egeode%2Euser+order:date-backward+pieter#query:list%3Aorg.apache.geode.user%20order%3Adate-backward%20pieter+page:1+mid:n25nznu7zur4xmar+state:results
>
> We are currently using  -Dgemfire.ALLOW_PERSISTENT_TRANSACTIONS=true
>
> Once we have the basics up and running we will still look at the
> TransactionWriter as recommended.
>
> We are currently trying to import our old data from Berkeley into Geode
> and for now I have one node locally with a replicated region.
> But we are planning to move to more nodes and partitioned/sharded regions.
>
> I assume this will happen on partitioned regions as well, as the issue is
> the combination of transactions on persistent regions and overflow.
>
> Also I see this bug is marked as *major* so is there any chance this will
> be fixed in the next couple of months?
> Or is our use of transactions across persistent regions just too out of the
> norm?
>
> If I do change the region to not use overflow, what will happen when it
> reaches the "heap percentage"?
>
> Kindly
> Pieter
>
>
>
> On Wed, May 2, 2018 at 10:14 PM, Dan Smith <ds...@pivotal.io> wrote:
>
>> I created GEODE-5173 for this issue.
>>
>> Thanks,
>> -Dan
>>
>> On Wed, May 2, 2018 at 12:17 PM, Dan Smith <ds...@pivotal.io> wrote:
>>
>>> Hi Pieter,
>>>
>>> I was able to reproduce this problem. It looks like it is an issue with
>>> doing a get inside of a transaction along with a replicated region using
>>> persistence and overflow. The value is still on disk, and for whatever
>>> reason if you do the get inside of a transaction it is returning you this
>>> bogus NOT_AVAILABLE token instead of reading the value off disk.
>>>
>>> I'll create a JIRA and attach my test. In the meantime, you could do the
>>> get outside of a transaction, or you could change your region to not use
>>> overflow. If you try changing the region to not use overflow, I think
>>> you'll also have to set the system property gemfire.disk.recoverValuesSync
>>> to true to make sure that in all cases you never have to read from disk.
>>>
>>> Thanks,
>>> -Dan
>>>
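
For reference, Dan's second workaround (drop overflow and recover values
synchronously) would look roughly like this with the plain Geode API -- just a
sketch, not tested against 1.5.0; the gfe: XML in the original post maps onto
the same region attributes:

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionFactory;
import org.apache.geode.cache.RegionShortcut;

public class NoOverflowServerSketch {
    public static void main(String[] args) {
        // Property named in the mail above; usually passed on the command line
        // as -Dgemfire.disk.recoverValuesSync=true so recovered values are
        // loaded into memory synchronously at startup.
        System.setProperty("gemfire.disk.recoverValuesSync", "true");

        Cache cache = new CacheFactory()
                .set("mcast-port", "0") // standalone here; the real setup uses locators=pvz-dell[10334]
                .create();

        // Disk store for the persistent region (default disk dir "." in this sketch).
        cache.createDiskStoreFactory().create("tauDiskStore");

        // Persistent replicated region with NO eviction/overflow attributes.
        RegionFactory<Object, Object> factory =
                cache.createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT);
        factory.setDiskStoreName("tauDiskStore");
        Region<Object, Object> region = factory.create("ClassID-ClassName-LookUp");
    }
}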

Re: NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Posted by Pieter van Zyl <pi...@lautus.net>.
Hi Eric

Thanks for clearing that up. I suspected that was what was happening.

I guess I misread the documentation and thought it would just evict from
memory.

Looking forward to 1.7

@John. Thanks for the link.

Kindly
Pieter


On Tue, May 22, 2018 at 7:30 PM, Eric Shu <es...@pivotal.io> wrote:

> If you set a TTL with invalidate or destroy, the change will be reflected in
> the persistent layer as well. It is the same as performing an invalidate or
> destroy on the entry.
>
> The original issue has been fixed in 1.7 (see
> https://issues.apache.org/jira/browse/GEODE-5173). Transactions will work
> with eviction overflow after a restart.
>
>
> On Tue, May 22, 2018 at 8:57 AM, Pieter van Zyl <pieter.van.zyl@lautus.net
> > wrote:
>
>> Hi guys,
>>
>> Just a question wrt this topic.
>>
>> I can see the main issue has been fixed on 1.7.0 according to Jira....
>>
>> https://issues.apache.org/jira/browse/GEODE-5173
>> Tried to get the snapshot but cannot get it to work, as it seems to only
>> allow clients of version 1.5.0 while the spring-data-geode version still
>> requires 1.6.0.
>> But this is off-topic and another question for later today.
>>
>> In the meantime I have tried to use *expiration* with a *persistent*
>> region and *transactions*.
>>
>> Currently we are trying to import data from our old database into Geode.
>>
>> So the region was:
>>
>>
>> <gfe:replicated-region id="ClassID-ClassName-LookUp"
>>                        disk-store-ref="tauDiskStore"
>>                        persistent="true">
>>     <gfe:eviction type="HEAP_PERCENTAGE" action="OVERFLOW_TO_DISK"/>
>> </gfe:replicated-region>
>>
>> *Changed to:*
>>
>> <gfe:replicated-region id="ClassName-ClassID-LookUp"
>>                        disk-store-ref="tauDiskStore"
>>                        statistics="true"
>>                        persistent="true">
>>     <gfe:region-ttl timeout="60" action="INVALIDATE"/>
>> </gfe:replicated-region>
>>
>> <gfe:disk-store id="tauDiskStore">
>>     <gfe:disk-dir location="geode/tauDiskStore"/>
>> </gfe:disk-store>
>>
>>
>> But after running the import and testing whether we can read the data, all
>> the data is there.
>> But as soon as I restart the server and check again, the data is not
>> there.
>>
>> I would have thought that after TTL the data would be
>> invalidated/destroyed in the in-memory region/cache but would still be on
>> disk as this is a persistent region?
>>
>> Am I wrong to expect that this combination should still have ALL the
>> data persisted on disk after a restart?
>>
>> https://geode.apache.org/docs/guide/11/developing/eviction/configuring_data_eviction.html
>> https://geode.apache.org/docs/guide/11/developing/storing_data_on_disk/how_persist_overflow_work.html
>>
>>
>> Kindly
>> Pieter
>>
>> On Fri, May 4, 2018 at 7:32 PM, Anilkumar Gingade <ag...@pivotal.io>
>> wrote:
>>
>>> Setting eviction overflow helps keep the system from running out of memory
>>> in critical situations. It's true for both persistent and non-persistent
>>> regions. In the case of a persistent region, if overflow is not set, the
>>> data is both in memory and on disk.
>>>
>>> One way to handle the memory situation is through the resource manager, but
>>> if the system is under memory pressure, it may impact system performance.
>>>
>>> -Anil
>>>
>>>
>>> On Fri, May 4, 2018 at 4:10 AM, Pieter van Zyl <
>>> pieter.van.zyl@lautus.net> wrote:
>>>
>>>> Good day.
>>>>
>>>> Thanks again for all the feedback.
>>>>
>>>> I hope the bug will get sorted out.
>>>>
>>>> For now I have removed the eviction policies and the error no longer
>>>> occurs after a restart.
>>>>
>>>> I assume that if one uses persistent regions, then eviction+overflow is
>>>> not that critical, as the data will be "backed" in the store/disk. One
>>>> just needs enough memory.
>>>> Eviction+overflow, I suspect, is quite critical when one has a fully
>>>> in-memory grid, where running out of memory could cause issues if there
>>>> is no overflow to disk?
>>>>
>>>> I am thinking that for now I could look at *expiration* on the region
>>>> instead, to keep only *relevant* data in the in-memory regions and
>>>> prevent running out of memory.
>>>> Will try to keep data in memory for as long as possible.
>>>>
>>>> Currently we cannot remove the transactions that we use with the
>>>> persistent regions. We might in the future.
>>>>
>>>> Kindly
>>>> Pieter
>>>>
>>>>
>>>> On Thu, May 3, 2018 at 1:16 AM, Dan Smith <ds...@pivotal.io> wrote:
>>>>
>>>>> > I assume this will happen on partitioned regions as well as the
>>>>> issue is the combination of transactions on persistent regions and overflow.
>>>>>
>>>>> Unfortunately yes, this bug also affects partitioned regions
>>>>>
>>>>> > Also I see this bug is marked as *major* so is there any chance
>>>>> this will be fixed in the next couple of months?
>>>>>
>>>>> I'm not sure. Geode is an open source project, so we don't really
>>>>> promise fixes in any specific timeframe.
>>>>>
>>>>> > If I do change the region to not use overflow what will happen when
>>>>> it reaches the "heap percentage"?
>>>>>
>>>>> The data will stay in memory. Overflow lets you avoid running out of
>>>>> memory by overflowing data to disk. Without that you could end up running
>>>>> out of memory if your region gets too large.
>>>>>
>>>>> -Dan
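
For my own notes, Anilkumar's resource-manager suggestion above would look
something like this with the plain Geode API (the heap percentages are
illustrative values, not anything recommended in this thread):

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.control.ResourceManager;

public class HeapLimitsSketch {
    public static void main(String[] args) {
        Cache cache = new CacheFactory()
                .set("mcast-port", "0") // standalone for the sketch
                .create();

        // Above eviction-heap-percentage, HEAP_PERCENTAGE eviction controllers
        // start evicting/overflowing; above critical-heap-percentage, Geode
        // starts rejecting operations to protect the member.
        ResourceManager resourceManager = cache.getResourceManager();
        resourceManager.setEvictionHeapPercentage(75.0f); // illustrative value
        resourceManager.setCriticalHeapPercentage(90.0f); // illustrative value
    }
}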

Re: NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Posted by Eric Shu <es...@pivotal.io>.
If you set a TTL with invalidate or destroy, the change will be reflected in
the persistent layer as well. It is the same as performing an invalidate or
destroy on the entry.

The original issue has been fixed in 1.7 (see
https://issues.apache.org/jira/browse/GEODE-5173). Transactions will work
with eviction overflow after a restart.
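
In API terms the behaviour looks like this (a rough sketch using entry TTL
rather than the region TTL in your config, reusing your region and disk-store
names):

import java.io.File;

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.ExpirationAction;
import org.apache.geode.cache.ExpirationAttributes;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionFactory;
import org.apache.geode.cache.RegionShortcut;

public class TtlOnPersistentRegionSketch {
    public static void main(String[] args) {
        Cache cache = new CacheFactory()
                .set("mcast-port", "0") // standalone for the sketch
                .create();

        // Disk store matching the config earlier in the thread.
        File diskDir = new File("geode/tauDiskStore");
        diskDir.mkdirs();
        cache.createDiskStoreFactory()
                .setDiskDirs(new File[] {diskDir})
                .create("tauDiskStore");

        RegionFactory<Object, Object> factory =
                cache.createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT);
        factory.setDiskStoreName("tauDiskStore");
        factory.setStatisticsEnabled(true); // required for expiration
        // When the timeout fires, the INVALIDATE is applied to the persisted
        // copy as well, so the value does not come back after a restart.
        factory.setEntryTimeToLive(
                new ExpirationAttributes(60, ExpirationAction.INVALIDATE));

        Region<Object, Object> region = factory.create("ClassName-ClassID-LookUp");
    }
}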



Re: NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Posted by John Blum <jb...@pivotal.io>.
Pieter-

Regarding your (previous) serialization issue, see this...

https://issues.apache.org/jira/browse/GEODE-4822?focusedCommentId=16484267&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16484267

Regards,
-John


On Tue, May 22, 2018 at 8:57 AM, Pieter van Zyl <pi...@lautus.net>
wrote:

> Hi guys,
>
> Just a question wrt to this topic.
>
> I can see the main issue has been fixed on 1.7.0 according to Jira....
>
> https://issues.apache.org/jira/browse/GEODE-5173
> Tried to get the snapshot but cannot get it to work. As it seems to only
> allow clients of version 1.5.0 and the spring-date-geode version still
> requires 1.6.0.
> But this is off-topic and another question for later today.
>
> In the mean time I have tried to use *Expiration* with a *persistent*
> region and *transactions*
>
> Currently we are trying to import data from our old database into Geode.
>
> So the region was:
>
>
>
>
>
> *<gfe:replicated-region id="ClassID-ClassName-LookUp"
>  disk-store-ref="tauDiskStore"                       persistent="true">
> <gfe:eviction type="HEAP_PERCENTAGE"
> action="OVERFLOW_TO_DISK"/></gfe:replicated-region>*
>
> *Changed to*
>
>
>
>
>
>
>
> *<gfe:replicated-region id="ClassName-ClassID-LookUp"
> disk-store-ref="tauDiskStore" statistics="true" persistent="true">
>  <gfe:region-ttl timeout="60"
> action="INVALIDATE"/></gfe:replicated-region><gfe:disk-store
> id="tauDiskStore">    <gfe:disk-dir
> location="geode/tauDiskStore"/></gfe:disk-store>*
>
>
> But after running the import and testing if we can read the data then all
> the data is there.
> But as soon as I restart the server and check again the data is not there.
>
> I would have thought that after TTL the data would be
> invalidated/destroyed in the in-memory region/cache but would still be on
> disk as this is a persistent region?
>
> I am I wrong to expect that this combination should still have ALL the
> data persisted on disk after a restart?
>
> https://geode.apache.org/docs/guide/11/developing/eviction/
> configuring_data_eviction.html
> https://geode.apache.org/docs/guide/11/developing/storing_
> data_on_disk/how_persist_overflow_work.html
>
>
> Kindly
> Pieter
>
> On Fri, May 4, 2018 at 7:32 PM, Anilkumar Gingade <ag...@pivotal.io>
> wrote:
>
>> Setting eviction overflow helps keeping the system running out-of memory
>> in critical situations. Its true for both persistent and non-persistent
>> region. In case of persistent region, if overflow is not set, the data is
>> both in-memory and disk.
>>
>> One way to handle the memory situation is through resource manager, but
>> if the system is under memory pressure, it may impact the system
>> performance.
>>
>> -Anil
>>
>>
>> On Fri, May 4, 2018 at 4:10 AM, Pieter van Zyl <pieter.van.zyl@lautus.net
>> > wrote:
>>
>>> Good day.
>>>
>>> Thanks again for all the feedback.
>>>
>>> I hope the bug will get sorted out.
>>>
>>> For now I have removed the eviction policies and there error is no more
>>> after a restart.
>>>
>>> I assume that if one uses persistent regions, then the eviction+overflow
>>> is not that critical as the data will be "backed" in the store/disk. One
>>> just need enough memory.
>>> Eviction+Overflow I suspect is quite critical when one has a full
>>> in-memory grid and running out of memory could cause issues if there is no
>>> overflow to disk?
>>>
>>> I am thinking that for now I could look at *expiration* rather on the
>>> region? To keep only *relevant* data in the in-memory regions for now
>>> to prevent running out of memory.
>>> Will try and keep data in memory for as long as possible.
>>>
>>> Currently we cannot remove the transactions that we use with the
>>> persistent regions. We might in the future.
>>>
>>> Kindly
>>> Pieter
>>>
>>>
>>> On Thu, May 3, 2018 at 1:16 AM, Dan Smith <ds...@pivotal.io> wrote:
>>>
>>>> > I assume this will happen on partitioned regions as well as the issue
>>>> is the combination of transactions on persistent regions and overflow.
>>>>
>>>> Unfortunately yes, this bug also affects partitioned regions
>>>>
>>>> > Also I see this bug is marked as *major* so is there any chance this
>>>> will be fixed in the next couple of months?
>>>>
>>>> I'm not sure. Geode is an open source project, so we don't really
>>>> promise fixes in any specific timeframe.
>>>>
>>>> > If I do change the region to not use overflow what will happen when
>>>> it reaches the "heap percentage"?
>>>>
>>>> The data will stay in memory. Oveflow lets you avoid running out of
>>>> memory by overflowing data to disk. Without that you could end up running
>>>> out of memory if your region gets to large.
>>>>
>>>> -Dan
>>>>
>>>> On Wed, May 2, 2018 at 2:07 PM, Pieter van Zyl <
>>>> pieter.van.zyl@lautus.net> wrote:
>>>>
>>>>> Hi Dan,
>>>>>
>>>>> Thanks for tracking this down!
>>>>>
>>>>> Much appreciated.
>>>>>
>>>>> This might also be why I didn't see it at first as we didn't activate
>>>>> the transactions on the persistent regions when we started with this
>>>>> evaluation.
>>>>>
>>>>> Based on this discussion
>>>>>
>>>>> https://markmail.org/message/jsabcdvyzsdrkvba?q=list:org%2Ea
>>>>> pache%2Egeode%2Euser+order:date-backward+pieter#query:list%3
>>>>> Aorg.apache.geode.user%20order%3Adate-backward%20pieter+page
>>>>> :1+mid:n25nznu7zur4xmar+state:results
>>>>>
>>>>> We are currently using  -Dgemfire.ALLOW_PERSISTENT_TRANSACTIONS=true
>>>>>
>>>>> Once we have the basics up and running we will still look at the
>>>>> TransactionWriter as recommended.
>>>>>
>>>>> We are currently trying to import our old data from Berkeley into
>>>>> Geode and for now I have one node locally with a replicated region.
>>>>> But we are planning to move to more nodes and partition/sharded
>>>>> regions.
>>>>>
>>>>> I assume this will happen on partitioned regions as well as the issue
>>>>> is the combination of transactions on persistent regions and overflow.
>>>>>
>>>>> Also I see this bug is marked as *major* so is there any chance this
>>>>> will be fixed in the next couple of months?
>>>>> Or is our use of transactions across persistent regions just to out of
>>>>> the norm?
>>>>>
>>>>> If I do change the region to not use overflow what will happen when it
>>>>> reaches the "heap percentage"?
>>>>>
>>>>> Kindly
>>>>> Pieter
>>>>>
>>>>>
>>>>>
>>>>> On Wed, May 2, 2018 at 10:14 PM, Dan Smith <ds...@pivotal.io> wrote:
>>>>>
>>>>>> I created GEODE-5173 for this issue.
>>>>>>
>>>>>> Thanks,
>>>>>> -Dan
>>>>>>
>>>>>> On Wed, May 2, 2018 at 12:17 PM, Dan Smith <ds...@pivotal.io> wrote:
>>>>>>
>>>>>>> Hi Pieter,
>>>>>>>
>>>>>>> I was able to reproduce this problem. It looks like it is an issue
>>>>>>> with doing a get inside of a transaction along with a replicated region
>>>>>>> using persistence and overflow. The value is still on disk, and for
>>>>>>> whatever reason if you do the get inside of a transaction it is returning
>>>>>>> you this bogus NOT_AVAILABLE token instead of reading the value off disk.
>>>>>>>
>>>>>>> I'll create a JIRA and attach my test. In the meantime, you could do
>>>>>>> the get outside of a transaction, or you could change your region to not
>>>>>>> use overflow. If you try changing the region to not use overflow, I think
>>>>>>> you'll also have to set the system property gemfire.disk.recoverValuesSync
>>>>>>> to true to make sure that in all cases you never have to read from disk.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> -Dan
>>>>>>>


-- 
-John
john.blum10101 (skype)

Re: NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Posted by Pieter van Zyl <pi...@lautus.net>.
Hi guys,

Just a question wrt this topic.

I can see the main issue has been fixed in 1.7.0 according to Jira:

https://issues.apache.org/jira/browse/GEODE-5173
I tried to get the snapshot but cannot get it to work, as it seems to only
allow clients of version 1.5.0 and the spring-data-geode version still
requires 1.6.0.
But this is off-topic and another question for later today.

In the meantime I have tried to use *expiration* with a *persistent*
region and *transactions*.

Currently we are trying to import data from our old database into Geode.

So the region was:

<gfe:replicated-region id="ClassID-ClassName-LookUp"
                       disk-store-ref="tauDiskStore"
                       persistent="true">
    <gfe:eviction type="HEAP_PERCENTAGE" action="OVERFLOW_TO_DISK"/>
</gfe:replicated-region>

*Changed to*

<gfe:replicated-region id="ClassName-ClassID-LookUp"
                       disk-store-ref="tauDiskStore"
                       statistics="true"
                       persistent="true">
    <gfe:region-ttl timeout="60" action="INVALIDATE"/>
</gfe:replicated-region>

<gfe:disk-store id="tauDiskStore">
    <gfe:disk-dir location="geode/tauDiskStore"/>
</gfe:disk-store>


After running the import, when I test whether we can read the data, all the
data is there.
But as soon as I restart the server and check again, the data is not there.

I would have thought that after the TTL the data would be
invalidated/destroyed in the in-memory region/cache but would still be on
disk, as this is a persistent region?

Am I wrong to expect that this combination should still have ALL the data
persisted on disk after a restart?

https://geode.apache.org/docs/guide/11/developing/eviction/configuring_data_eviction.html
https://geode.apache.org/docs/guide/11/developing/storing_data_on_disk/how_persist_overflow_work.html
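
For reference, the check after the restart is essentially the following. This
is only a minimal sketch: the locator host/port come from our config, but the
sample key is a placeholder and the plain ClientCache stands in for our
Spring-configured client.

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class CheckAfterRestart {
    public static void main(String[] args) {
        // Connect to the restarted server through the locator.
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("pvz-dell", 10334)
                .create();

        Region<String, Object> region = cache
                .<String, Object>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("ClassName-ClassID-LookUp");

        // How many keys does the server still hold, and is a sample value readable?
        System.out.println("keys on server: " + region.keySetOnServer().size());
        System.out.println("sample value: " + region.get("some-known-key"));  // placeholder key

        cache.close();
    }
}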


Kindly
Pieter

On Fri, May 4, 2018 at 7:32 PM, Anilkumar Gingade <ag...@pivotal.io>
wrote:

> Setting eviction with overflow helps keep the system from running out of
> memory in critical situations. That's true for both persistent and
> non-persistent regions. In the case of a persistent region, if overflow is
> not set, the data is kept both in memory and on disk.
>
> One way to handle the memory situation is through the resource manager, but
> if the system is under memory pressure, it may impact system performance.
>
> -Anil
>
>

Re: NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Posted by Anilkumar Gingade <ag...@pivotal.io>.
Setting eviction with overflow helps keep the system from running out of
memory in critical situations. That's true for both persistent and
non-persistent regions. In the case of a persistent region, if overflow is not
set, the data is kept both in memory and on disk.

One way to handle the memory situation is through the resource manager, but if
the system is under memory pressure, it may impact system performance.
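
(For illustration, a server-side setup along those lines might look like the
sketch below; the percentage values are placeholders, not recommendations.)

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.control.ResourceManager;

public class ResourceManagerSketch {
    public static void main(String[] args) {
        Cache cache = new CacheFactory().create();

        // Regions configured with heap-LRU eviction start evicting once the
        // eviction threshold is crossed; above the critical threshold,
        // operations that add data fail until memory is freed.
        ResourceManager resourceManager = cache.getResourceManager();
        resourceManager.setEvictionHeapPercentage(75.0f);
        resourceManager.setCriticalHeapPercentage(90.0f);
    }
}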

-Anil


On Fri, May 4, 2018 at 4:10 AM, Pieter van Zyl <pi...@lautus.net>
wrote:

> Good day.
>
> Thanks again for all the feedback.
>
> I hope the bug will get sorted out.
>
> For now I have removed the eviction policies and the error no longer occurs
> after a restart.
>
> I assume that if one uses persistent regions, then the eviction+overflow
> is not that critical, as the data will be "backed" by the store/disk. One
> just needs enough memory.
> I suspect eviction+overflow is quite critical when one has a fully
> in-memory grid, where running out of memory could cause issues if there is
> no overflow to disk?
>
> I am thinking that for now I could look at *expiration* on the region
> instead, to keep only *relevant* data in the in-memory regions and prevent
> running out of memory.
> I will try to keep data in memory for as long as possible.
>
> Currently we cannot remove the transactions that we use with the
> persistent regions. We might in the future.
>
> Kindly
> Pieter
>
>

Re: NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Posted by Pieter van Zyl <pi...@lautus.net>.
Good day.

Thanks again for all the feedback.

I hope the bug will get sorted out.

For now I have removed the eviction policies and the error no longer occurs
after a restart.

I assume that if one uses persistent regions, then the eviction+overflow is
not that critical, as the data will be "backed" by the store/disk. One just
needs enough memory.
I suspect eviction+overflow is quite critical when one has a fully in-memory
grid, where running out of memory could cause issues if there is no overflow
to disk?

I am thinking that for now I could look at *expiration* on the region instead,
to keep only *relevant* data in the in-memory regions and prevent running out
of memory.
I will try to keep data in memory for as long as possible.
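
(As a rough sketch of that idea, using the plain Geode API rather than our
Spring config; the timeout, the action and the region shortcut are
assumptions, not a tested setup.)

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.ExpirationAction;
import org.apache.geode.cache.ExpirationAttributes;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class ExpirationSketch {
    public static void main(String[] args) {
        Cache cache = new CacheFactory().create();

        // Expiration needs statistics enabled; entries idle for 10 minutes
        // (600 seconds) get the configured expiration action applied.
        Region<String, Object> region = cache
                .<String, Object>createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT)
                .setStatisticsEnabled(true)
                .setEntryIdleTimeout(new ExpirationAttributes(600, ExpirationAction.INVALIDATE))
                .create("ClassID-ClassName-LookUp");
    }
}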

Currently we cannot remove the transactions that we use with the persistent
regions. We might in the future.

Kindly
Pieter


On Thu, May 3, 2018 at 1:16 AM, Dan Smith <ds...@pivotal.io> wrote:

> > I assume this will happen on partitioned regions as well as the issue is
> the combination of transactions on persistent regions and overflow.
>
> Unfortunately yes, this bug also affects partitioned regions
>
> > Also I see this bug is marked as *major* so is there any chance this
> will be fixed in the next couple of months?
>
> I'm not sure. Geode is an open source project, so we don't really promise
> fixes in any specific timeframe.
>
> > If I do change the region to not use overflow what will happen when it
> reaches the "heap percentage"?
>
> The data will stay in memory. Overflow lets you avoid running out of memory
> by overflowing data to disk. Without it you could end up running out of
> memory if your region gets too large.
>
> -Dan
>

Re: NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Posted by Dan Smith <ds...@pivotal.io>.
> I assume this will happen on partitioned regions as well, as the issue is
the combination of transactions on persistent regions and overflow.

Unfortunately yes, this bug also affects partitioned regions.

> Also I see this bug is marked as *major* so is there any chance this will
be fixed in the next couple of months?

I'm not sure. Geode is an open source project, so we don't really promise
fixes in any specific timeframe.

> If I do change the region to not use overflow what will happen when it
reaches the "heap percentage"?

The data will stay in memory. Overflow lets you avoid running out of memory
by overflowing data to disk. Without it you could end up running out of
memory if your region gets too large.
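
(Roughly, in plain Geode API terms, the difference is the sketch below; the
region names are placeholders.)

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.EvictionAction;
import org.apache.geode.cache.EvictionAttributes;
import org.apache.geode.cache.RegionShortcut;
import org.apache.geode.cache.util.ObjectSizer;

public class OverflowSketch {
    public static void main(String[] args) {
        Cache cache = new CacheFactory().create();

        // With heap-LRU eviction + overflow: once the eviction heap percentage
        // is reached, values are moved out to the region's disk store.
        cache.createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT)
                .setEvictionAttributes(EvictionAttributes.createLRUHeapAttributes(
                        ObjectSizer.DEFAULT, EvictionAction.OVERFLOW_TO_DISK))
                .create("withOverflow");

        // Without eviction/overflow: all values stay on the heap, so the region
        // can only grow until the JVM runs out of memory.
        cache.createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT)
                .create("withoutOverflow");
    }
}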

-Dan

On Wed, May 2, 2018 at 2:07 PM, Pieter van Zyl <pi...@lautus.net>
wrote:

> Hi Dan,
>
> Thanks for tracking this down!
>
> Much appreciated.
>
> This might also be why I didn't see it at first as we didn't activate the
> transactions on the persistent regions when we started with this evaluation.
>
> Based on this discussion
>
> https://markmail.org/message/jsabcdvyzsdrkvba?q=list:org%
> 2Eapache%2Egeode%2Euser+order:date-backward+pieter#query:
> list%3Aorg.apache.geode.user%20order%3Adate-backward%20pieter+page:1+mid:
> n25nznu7zur4xmar+state:results
>
> We are currently using  -Dgemfire.ALLOW_PERSISTENT_TRANSACTIONS=true
>
> Once we have the basics up and running we will still look at the
> TransactionWriter as recommended.
>
> We are currently trying to import our old data from Berkeley into Geode
> and for now I have one node locally with a replicated region.
> But we are planning to move to more nodes and partition/sharded regions.
>
> I assume this will happen on partitioned regions as well, as the issue is
> the combination of transactions on persistent regions and overflow.
>
> Also I see this bug is marked as *major* so is there any chance this
> will be fixed in the next couple of months?
> Or is our use of transactions across persistent regions just too out of
> the norm?
>
> If I do change the region to not use overflow what will happen when it
> reaches the "heap percentage"?
>
> Kindly
> Pieter
>
>
>

Re: NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Posted by Pieter van Zyl <pi...@lautus.net>.
Hi Dan,

Thanks for tracking this down!

Much appreciated.

This might also be why I didn't see it at first as we didn't activate the
transactions on the persistent regions when we started with this evaluation.

Based on this discussion

https://markmail.org/message/jsabcdvyzsdrkvba?q=list:org%2Eapache%2Egeode%2Euser+order:date-backward+pieter#query:list%3Aorg.apache.geode.user%20order%3Adate-backward%20pieter+page:1+mid:n25nznu7zur4xmar+state:results

We are currently using  -Dgemfire.ALLOW_PERSISTENT_TRANSACTIONS=true

Once we have the basics up and running we will still look at the
TransactionWriter as recommended.
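
For context, the shape of what we do is roughly the sketch below, with a
placeholder key and the plain Geode API standing in for our Spring-managed
transactions (the servers run with -Dgemfire.ALLOW_PERSISTENT_TRANSACTIONS=true).

import org.apache.geode.cache.CacheTransactionManager;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class TxReadSketch {
    public static void main(String[] args) {
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("pvz-dell", 10334)
                .create();
        Region<String, Object> region = cache
                .<String, Object>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("ClassID-ClassName-LookUp");

        CacheTransactionManager txManager = cache.getCacheTransactionManager();
        txManager.begin();
        try {
            // A get like this, done inside the transaction, is where the error
            // shows up after the server restart.
            System.out.println("read: " + region.get("some-key"));  // placeholder key
            txManager.commit();
        } finally {
            // Roll back if the transaction is still open (e.g. the get failed).
            if (txManager.exists()) {
                txManager.rollback();
            }
        }
        cache.close();
    }
}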

We are currently trying to import our old data from Berkeley into Geode and
for now I have one node locally with a replicated region.
But we are planning to move to more nodes and partition/sharded regions.

I assume this will happen on partitioned regions as well, as the issue is
the combination of transactions on persistent regions and overflow.

Also I see this bug is marked as *major* so is there any chance this will
be fixed in the next couple of months?
Or is our use of transactions across persistent regions just too out of the
norm?

If I do change the region to not use overflow what will happen when it
reaches the "heap percentage"?

Kindly
Pieter



On Wed, May 2, 2018 at 10:14 PM, Dan Smith <ds...@pivotal.io> wrote:

> I created GEODE-5173 for this issue.
>
> Thanks,
> -Dan
>
> On Wed, May 2, 2018 at 12:17 PM, Dan Smith <ds...@pivotal.io> wrote:
>
>> Hi Pieter,
>>
>> I was able to reproduce this problem. It looks like it is an issue with
>> doing a get inside of a transaction along with a replicated region using
>> persistence and overflow. The value is still on disk, and for whatever
>> reason if you do the get inside of a transaction it is returning you this
>> bogus NOT_AVAILABLE token instead of reading the value off disk.
>>
>> I'll create a JIRA and attach my test. In the meantime, you could do the
>> get outside of a transaction, or you could change your region to not use
>> overflow. If you try changing the region to not use overflow, I think
>> you'll also have to set the system property gemfire.disk.recoverValuesSync
>> to true to make sure that in all cases you never have to read from disk.
>>
>> Thanks,
>> -Dan
>>

Re: NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Posted by Dan Smith <ds...@pivotal.io>.
I created GEODE-5173 for this issue.

Thanks,
-Dan

On Wed, May 2, 2018 at 12:17 PM, Dan Smith <ds...@pivotal.io> wrote:

> Hi Pieter,
>
> I was able to reproduce this problem. It looks like it is an issue with
> doing a get inside of a transaction along with a replicated region using
> persistence and overflow. The value is still on disk, and for whatever
> reason if you do the get inside of a transaction it is returning you this
> bogus NOT_AVAILABLE token instead of reading the value off disk.
>
> I'll create a JIRA and attach my test. In the meantime, you could do the
> get outside of a transaction, or you could change your region to not use
> overflow. If you try changing the region to not use overflow, I think
> you'll also have to set the system property gemfire.disk.recoverValuesSync
> to true to make sure that in all cases you never have to read from disk.
>
> Thanks,
> -Dan
>

Re: NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Posted by Dan Smith <ds...@pivotal.io>.
Hi Pieter,

I was able to reproduce this problem. It looks like it is an issue with
doing a get inside of a transaction along with a replicated region using
persistence and overflow. The value is still on disk, and for whatever
reason if you do the get inside of a transaction it is returning you this
bogus NOT_AVAILABLE token instead of reading the value off disk.

I'll create a JIRA and attach my test. In the meantime, you could do the
get outside of a transaction, or you could change your region to not use
overflow. If you try changing the region to not use overflow, I think
you'll also have to set the system property gemfire.disk.recoverValuesSync
to true to make sure that in all cases you never have to read from disk.
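
For example, one way to do the get outside of the transaction would be to
suspend the transaction around the read; a rough sketch (the region name is
just a placeholder for one of yours):

import org.apache.geode.cache.CacheTransactionManager;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.TransactionId;
import org.apache.geode.cache.client.ClientCache;

// Rough sketch: step out of the current transaction, read the value
// non-transactionally, then re-join the suspended transaction.
public class NonTxGetExample {
    public static Object getOutsideTx(ClientCache cache, Object key) {
        Region<Object, Object> region = cache.getRegion("ClassID-ClassName-LookUp"); // placeholder
        CacheTransactionManager txMgr = cache.getCacheTransactionManager();

        TransactionId txId = txMgr.suspend(); // returns null if no transaction is active
        try {
            return region.get(key); // ordinary get, not routed through the TX state
        } finally {
            if (txId != null) {
                txMgr.resume(txId); // continue the original transaction
            }
        }
    }
}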

Thanks,
-Dan


Re: NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Posted by Bruce Schuchardt <bs...@pivotal.io>.
Thanks for providing steps to reproduce the problem.

The cryptic 50200 refers to a ticket in an old issue-tracker database.
The issue was that Token$NotAvailable had been added to the source base
as java-serializable (via ObjectOutputStream), but it did not exist in
previous versions, so it could potentially break backward compatibility
and hence the ability to do a rolling upgrade. The issue was closed when
NotAvailable was modified to no longer be java-serializable.

I know that doesn't help your situation - we need to look into it 
further to figure out what's going wrong.





Re: NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Posted by Pieter van Zyl <pi...@lautus.net>.
Hi Anthony

Yes, the domain classes are present on the server classpath. All seems to
work the first time; see the steps below.
I have not set read-serialized.

The steps are:

   1. Start Geode server with empty database.
   2. Startup application initializer to populate database with dummy data.
   3. Startup Swing app.
   4. Browse data. All 100%
   5. Stop application.
   6. Start application and browse again. All 100%

But if I do the following

   1. Start Geode server with empty database.
   2. Startup application initializer to populate database with dummy data.
   3. Startup Swing app.
   4. Browse data. All 100%
   5. Stop application. AND stop Geode server
   6. Start Geode Database with dummy data.
   7. Start application. This then fails with *NotSerializableException:
   org.apache.geode.internal.cache.Token$NotAvailable*


I am just wondering why it is trying to send *Token$NotAvailable* back to
the client.
Could this be related to our custom RDBGeodeSerializer, which extends
ReflectionBasedAutoSerializer?
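
For context, a stripped-down, hypothetical sketch of that kind of subclass
(not our actual RDBGeodeSerializer, which may do more) would look like:

import org.apache.geode.pdx.ReflectionBasedAutoSerializer;

// Hypothetical sketch: a ReflectionBasedAutoSerializer subclass that simply
// forwards comma-separated class-name patterns to the parent constructor.
public class PatternAutoSerializer extends ReflectionBasedAutoSerializer {
    public PatternAutoSerializer(String commaSeparatedPatterns) {
        super(commaSeparatedPatterns.split("\\s*,\\s*"));
    }
}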


public static class Removed extends Token implements DataSerializableFixedID, Serializable {

vs:

public static class NotAvailable extends Token {
  ...
  @Override
  public String toString() {
    return "NOT_A_TOKEN";
  }
  // to fix bug 50200 no longer serializable
}
...

*What is bug 50200? I cannot find this bug on your Jira site.*


Some of the client config:

<bean id="pdxSerializer" class="org.rdb.geode.mapping.RDBGeodeSerializer">
    <constructor-arg value="org.rdb.*,net.lautus.*"/>
</bean>


<util:properties id="gemfire-props">
    <prop key="log-level">debug</prop>
</util:properties>

<gfe:client-cache properties-ref="gemfire-props"
pdx-serializer-ref="pdxSerializer" pool-name="pool"/>
<gfe:transaction-manager/>


<gfe:pool id="pool" socket-buffer-size="65536" max-connections="300"
min-connections="200">

    <gfe:locator host="pvz-dell.lautus.net" port="10334"/>

</gfe:pool>


<gfe:client-region id="org.rdb.internal.session.rootmap.RootMapHolder"
shortcut="CACHING_PROXY_HEAP_LRU"
                   ignore-if-exists="true">
    <!--<gfe:cache-listener ref="cacheClientListener"/>-->
</gfe:client-region>

<!--<bean id="cacheClientListener"
class="org.rdb.session.geode.LoggingClientCacheListener"/>-->

<gfe:client-region id="ClassName-ClassID-LookUp"
shortcut="CACHING_PROXY_HEAP_LRU"
                   ignore-if-exists="true"/>
<gfe:client-region id="ClassID-ClassName-LookUp"
shortcut="CACHING_PROXY_HEAP_LRU"
                   ignore-if-exists="true"/>
<gfe:client-region id="ClassCounter" shortcut="CACHING_PROXY_HEAP_LRU"
                   ignore-if-exists="true"/>


Some extra debug

*<<<<<GeodeDatabaseSession>>>>>*
*[debug 2018/05/02 15:15:44.559 SAST <main> tid=0x1] setting up server
affinity*

*[debug 2018/05/02 15:15:44.559 SAST <main> tid=0x1] Built a new TXState:
class org.apache.geode.internal.cache.tx.ClientTXStateStub@673992096 target
node: null me:pvz-dell(13285:loner):45990:8485fe20*

*[debug 2018/05/02 15:15:44.570 SAST <main> tid=0x1] constructing a GetOp
for key 0*

*[debug 2018/05/02 15:15:44.570 SAST <main> tid=0x1] GetOp invoked for key
0*

*[debug 2018/05/02 15:15:44.570 SAST <main> tid=0x1] setting server
affinity to pvz-dell.lautus.net:40404*

*Exception in thread "main"
org.apache.geode.cache.client.ServerOperationException: remote server on
pvz-dell(13285:loner):45990:8485fe20:
org.apache.geode.SerializationException: failed serializing object*
* at
org.apache.geode.cache.client.internal.OpExecutorImpl.handleException(OpExecutorImpl.java:669)*


Kindly
Pieter


On Tue, May 1, 2018 at 5:35 PM, Anthony Baker <ab...@pivotal.io> wrote:

> Hi Pieter!
>
> Are your domain classes present on the server classpath?  Do you know if
> ‘read-serialized’ was set or changed?
>
> If you could provide a set of steps to reproduce this that would be great.
>
> Thanks,
> Anthony
>
>
>

Re: NotSerializableException: org.apache.geode.internal.cache.Token$NotAvailable after server restart

Posted by Anthony Baker <ab...@pivotal.io>.
Hi Pieter!

Are your domain classes present on the server classpath?  Do you know if ‘read-serialized’ was set or changed?

If you could provide a set of steps to reproduce this that would be great.

Thanks,
Anthony


